this post was submitted on 27 Feb 2024
427 points (98.2% liked)
Technology
There is no monopoly. If Nvidia doesn't play it right in the coming years, they won't hold on to their current position. Nvidia isn't getting into custom chips just for fun: if the major cloud providers end up running their own custom silicon, that's a major blow for Nvidia.
The point the article makes is that Nvidia is pressuring current customers by threatening shipping delays, which is an abuse of its market power.
And I hope they get punished for it, but that's not the same as Nvidia having a monopoly.
They have as much of a monopoly as Google has on search. Sure, there are competitors, and there's a chance that new tech might disrupt them, but they are still able to abuse their market position (for example, by forcing websites to use Google Analytics or be penalised in search results).
I disagree. Most of the big actors in the cloud/AI space have their own silicon in the works, which is a big enough concern for Nvidia that they are looking into providing custom solutions. If the CUDA moat breaks, Nvidia will be in a much weaker position.
The search engine landscape is completely different, although, to be fair, I don't think you meant that those markets are directly comparable.
I'd argue that Google is suffering from similar issues when it comes to LLMs. They freaked out when ChatGPT dropped, and I'm pretty sure Bard was rushed out just to compete with Bing Chat.
They already do, but AFAIK not at scale. My belief (though that's just intuition; I don't have numbers to back it up) is that Alphabet for GCP, Microsoft for Azure, Amazon for AWS and others do design their own chips, their own racks, etc., but mostly as promotional R&D. They want to show investors that they are acutely aware of their dependency on Nvidia and are trying to be more resilient by having alternatives.

What is still happening, though, is that in terms of compute per watt, and thus per dollar, Nvidia is the de facto standard through its entire stack: both hardware (H100, A100, 40xx, etc.) and software (mostly CUDA here), but also the trust of CTOs. Consequently, my bet is that GCP, Azure and AWS do have their custom silicon running today, but it's less than 1% of their compute, and they probably even offer it to customers at a discounted price.

It's a bit like China and the billions poured into making their own chips. Sure, they are showing that they can (minus the dependency on ASML...), but at what cost? Making a chipset with performance equivalent to the state of the art is a research feat not to be downplayed, but doing it at scale in a commercially competitive way is quite different.
Anyway, that's just my hunch, so if anybody has data to contradict it, please do share.