Surprised this hasn't been mentioned yet: https://www.rollingstone.com/culture/culture-news/meta-ai-users-facebook-instagram-1235221430/
Facebook and Instagram to add AI users. I'm sure that's what everyone has been begging for...
AIfu
That's gold. I like it.
Damn, HP doesn't mess around. I'm going to stop trashing them around the office.
I was about to laugh about 2020 being cyberpunk, but come to think of it, 2020 was the most cyberpunk year so far, with everyone stuck inside doing everything on the internet.
I will never get tired of that saltman pic.
My first memory of programming was typing in a BASIC program for hours with my older cousin into his VIC-20 from a magazine where the last, like, ten pages were nothing but hexadecimal numbers. Ended up being a Robotron clone. We played it for a while then turned off the computer and it was gone.
I loved making maps for Q3. I made so many of them, and some even got rotation on some servers. The simplicity was perfect for someone like me, just brushes and shaders. When UT2K4 came out I decided to try making maps for that, but everything was intricate 3D models, which I couldn't do, so I gave up and went back to Q3.
I even made some maps for Q1, but I think I spent most of my time trying to make mods. My favorite was an inferno gun from Battletech: Crescent Hawk's Inception, which used the rocket launcher to shoot a fireball that would stick to the target and do big damage in ticks.
Well, they used to have 700 customers.
Skimmed the paper, but I don't see the part where the game engine was being played. They trained an "agent" to play Doom using ViZDoom, and trained the diffusion model on the agent's "trajectories". But I didn't see anything about giving the agents the output of the diffusion model for their gameplay, or the diffusion model reacting to input.
It seems like it was able to generate the Doom video based on a given trajectory, and the assumption is that the trajectory could be real-time human input? That's the best I can come up with. And the experiment was just some people watching video clips, which doesn't track with the claims at all.
God damn, couldn't even get past the first paragraph, where he describes himself while trying to attribute it to others.