The greatest irony would be if OpenAI was killed by an open AI
I'll allow it.
Not only would it be the greatest irony, it would be the best outcome for humanity. Fuck ClosedAI
I don't mind
There's no way I believe that Deepseek was made for the $5m figure I've seen floating around.
But that doesn't matter. If it cost $15m, $50m, $500m, or even more than that, it's probably worth it to take a dump in Sam Altman's morning coffee.
DeepSeek claimed the model training took 2,788 thousand H800 GPU hours, which, at a cost of $2/GPU hour, comes out to a mere $5.576 million.
That seems impossibly low.
DeepSeek is clear that these costs are only for the final training run and exclude all other expenses.
There would have been many other runs before the release version.
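For what it's worth, the arithmetic behind the quoted figure does check out (the $2/GPU-hour rate is DeepSeek's own assumption, not a market quote):

```python
# Back-of-envelope check of DeepSeek's quoted final-run figure:
# 2,788 thousand H800 GPU-hours at roughly $2 per GPU-hour.
gpu_hours = 2_788_000
cost_per_gpu_hour = 2.00
total_cost = gpu_hours * cost_per_gpu_hour
print(f"${total_cost:,.0f}")  # → $5,576,000
```

Which is exactly why the "exclude all other expenses" caveat matters: this line item is only the final run.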
Red Lobster went under and these guys still exist.
Hey I saw a red lobster today, they’re still here. 3 kinds of shrimp for $22!
There is no downside to lying these days. Yet the public seems surprised that all they see is lying.
So many people don't even question it. Talk loudly and confidently enough and that's the bar for most, unfortunately.
TikTok, Instagram and similar are great examples of this, initially you think wow cool I'm seeing all of these new things and getting so much info. Then you see someone come up on a topic you know something about and the facade breaks when all they do is spew misinformation that attracts a crowd (usually via fear).
And then the hordes of sycophant DinkDonkers repeat their detritus over every comment thread they can
How about that: venture capitalists don't know what's going on in the market any more than anyone else does. They're just arrogant because they have metric shit-tons of money.
I'm sure Altman forcing out all the actual brains on the board of OpenAI, like Chief Research Scientist https://en.m.wikipedia.org/wiki/Ilya_Sutskever, has nothing to do with this or their rapidly declining lead in the field.
Corpo parasite took over before the nerd could make the company great.
Gets Cucked by the Chinese
There is some justice out there.
I'd say their lead is over.
I'm enjoying all the capitalist oligarchs losing their minds over DeepSeek destroying the piles of money they were already counting in their heads. It's not much, but it's something.
Not just deepseek, but everyone else who is now forking R1 and training for their specific use case.
He's not wrong. He was speaking in the implied scope of capitalism, where you can't do something that cheap because, without the ability to hit a huge payout for investors, no one will fund you.
We're just seeing that capitalism can't compete with an economy that can produce stuff without making investors rich.
Probably should point out that DeepSeek is owned by a Chinese hedge fund. They specialize in algorithmic trading. Can't get much more capitalist than that. Very happy to see Chinese capitalists release an open source model. No doubt Altman and cronies will seek protection under the guise of national security. I guess free market competition is only good when you are not getting your arse kicked.
Good detail added, I did not know that. But they're still doing so with a much smaller payout than if they tried to compete the way a US capitalist would through a closed source full ownership model.
Chinese economy is even more capitalist if that's possible.
Sam Altman is full of shit? Nooooooooooo
I kind of suspect this is as much about A.I. progress hitting a wall as anything else. It doesn't seem like any of the LLMs are improving much between versions anymore. The U.S. companies were just throwing more compute (and money/electricity) at the problem and seeing small gains, but it'll be a while before the next breakthrough.
Kind of like self-driving cars during their hype cycle. They felt tantalizingly close 10 years ago or so but then progress stalled and it’s been a slow grind ever since.
I think with a lot of technologies the first 95% is easy but the last 5% becomes exponentially harder.
With LLMs though I think the problem is conflating them with other forms of intelligence.
They're amazingly good at forming sentences, but they're unable to do real actual work.
It's called the 80/20 rule. The first 80% is the easy part.
Yeah. I really dislike this "rule" because it's commonly espoused by motivational speakers and efficiency "experts" saying you make 80% of your money from 20% of your time.
It sounds great if you've never heard it before but in practice it just means "be more efficient" and is not really actionable.
Nevertheless, like the funding-hungry CEO he is, Altman quickly turned the thread around to OpenAI promising jam tomorrow, with the execution of the firm's roadmap, amazing next-gen AI models, and "bringing you all AGI and beyond."
AGI and beyond?
If you throw billions of dollars at a problem, you will always get the most expensive solution.
...if you get a solution at all.
Artificial General Intelligence, the pipe dream of a machine intelligence that isn't specialized for producing any one thing but is generally capable, like a human.
Edit: recommended reading is "Life 3.0". While I think it is overly positive about AI, it gives a good overview of the AI industry and innovation, and the ideas behind it. You will have to swallow a massive chunk of Musk fanboyism, although to be fair it predates Musk's waving the fasces.
I get it. I just didn't know that they are already using "beyond AGI" in their grifting copytext.
Yeah, that started a week or two ago. Altman dropped the AGI promise too soon now he's having to become a sci-fi author to keep the con cooking.
now he's having to become a sci-fi author to keep the con cooking.
Dude thinks he’s Asimov but anyone paying attention can see he’s just an L Ron Hubbard.
The fact that Microsoft and OpenAI define Artificial General Intelligence in terms of profit suggests they're not confident about achieving the real thing:
The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits. (Source)
Given this definition, when they say they'll achieve AGI and beyond, they simply mean they'll achieve more than $100 billion in profit. It says nothing about what they expect to achieve technically.
Well that’s a pretty fucking ridiculous definition lol.
The amount of people spamming 'deepseek' in YouTube comments and live streams is insane. They definitely have a shitload of shadow funding.
While I tend to avoid conspiracy-theory-type thinking, the nature of modern social media makes it very easy to run astroturfing/botting campaigns. It's reasonable to be suspicious.
Bot campaigns seem pretty cheap when your business is making chat bots
Or if you have access to click-farm-type propaganda resources.
Or you're a government with endless funds at your disposal.
It’s easy to write a bot. You just ask ~~ChatGPT~~ DeepSeek for the code.
I find the online cheerleading for AI and AGI strange. It feels like a frothing mob rooting for the unleashing of a monster at times.
I mean, a lot of it is just people who started using ChatGPT for some simple and boring task (writing an email or a CV, or summarizing an article) and started thinking it's the best thing since sliced bread.
I would know, since I'm a university student. I know the limitations of current AI stuff, so I can cautiously use it for certain tasks without trusting the output to be correct. Meanwhile, my friend thought he was making ChatGPT better at answering his multiple-choice economics quiz by telling it which of the answers it gave were wrong...
IMO they're way too fixated on making a single model AGI.
Some people have tried combining multiple specialized models (voice recognition + image recognition + LLM + controls + voice synthesis) with quite compelling results.
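A toy sketch of that compositional idea; every function here is a stub standing in for a real model, and none of the names come from an actual library:

```python
# Hypothetical pipeline composing specialized models instead of one monolith.
# Each stage is a trivial stub where a real model would be plugged in.

def speech_to_text(audio: bytes) -> str:
    return audio.decode()            # stand-in for a voice-recognition model

def describe_image(image: bytes) -> str:
    return "a cat on a keyboard"     # stand-in for an image-recognition model

def llm(prompt: str) -> str:
    # Stand-in for a language model: echoes the scene it was given as context.
    scene = prompt.splitlines()[0].removeprefix("Context: ")
    return f"I see {scene}."

def text_to_speech(reply: str) -> bytes:
    return reply.encode()            # stand-in for a voice-synthesis model

def assistant_turn(audio: bytes, image: bytes) -> bytes:
    text = speech_to_text(audio)
    scene = describe_image(image)
    reply = llm(f"Context: {scene}\nUser: {text}")
    return text_to_speech(reply)

print(assistant_turn(b"what is this?", b"<image bytes>").decode())
# → I see a cat on a keyboard.
```

The appeal is that each stage can be swapped or upgraded independently, rather than betting everything on one model doing all of it.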
I mean, I get that the DeepSeek launch exposes the roadmap NVIDIA and OpenAI have been pushing as the only way to AI as incorrect, but doesn't DeepSeek's ability to harness fewer, lower-quality processors thereby allow companies like NVIDIA and OpenAI to expand their infrastructure's abilities even further, faster? Not sure why the selloff occurred; it's like someone got a PC to POST quicker with a 286, and everybody said hey, those 386s sure do look nice, but we're gonna fool around with these instead.
I believe this will ultimately be good news for Nvidia, terrible news for OpenAI.
Better access to software is good for hardware companies. Nvidia is still the world leader when it comes to delivering computing power for AI. That hasn’t changed (yet). All this means is that more value can be made from Nvidia gpus.
For OpenAI, their entire business model is based on the moat they’ve built around ChatGPT. They made a $1B bet on this idea - which they now have lost. All their competitive edge is suddenly gone. They have no moat anymore!