How do you imagine that would work?
Yeah, don't be unrealistic. We can't just have a group of competent individuals properly plan out how to dismantle a monopoly to allow for proper competition in the industry. If they don't hold onto their monopoly, how will we ever see technological advancements?
No really. NVIDIA's entire business is based on one main chip design. How would you break up a company that essentially has only one design, which it implements to varying degrees across its products?
There is literally nothing to break up.
AI accelerators and gaming GPUs could definitely be split apart. AMD already uses different architectures for those applications, and it has notably smaller engineering teams.
Raytracing could also conceivably be spun into a separate division; it's already fairly separate in the architecture. Then Intel, AMD, and whatever other competitors pop up could license the raytracing tech stack or even buy raytracing chiplets.
Some of the software solutions, like DLSS, could be spun off and licensed to competitors.
No they can't, because all Nvidia products are the same base design at different scales.
NVIDIA has for many years designed the main chip first, the biggest and baddest of them all, used for the very highest-end products. All other products are derived from it by cutting parts away, making the chips cheaper for their respective markets. There is no reasonable way to split this up.
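As a toy illustration of that "one design, cut down for each market" model, here's a sketch in Python. All names and numbers are entirely made up, not real NVIDIA specifications:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class DieConfig:
    """Hypothetical GPU die configuration; all figures are illustrative."""
    name: str
    sm_count: int         # streaming multiprocessors enabled
    memory_bus_bits: int  # memory interface width
    l2_cache_mb: int      # L2 cache size

# Design the flagship die first...
flagship = DieConfig("flagship", sm_count=144, memory_bus_bits=384, l2_cache_mb=96)

# ...then derive the cheaper SKUs by disabling parts of the same design
# (salvaging partially defective dies, narrowing the bus, and so on).
upper_mid = replace(flagship, name="upper-mid", sm_count=80, memory_bus_bits=256, l2_cache_mb=64)
mainstream = replace(flagship, name="mainstream", sm_count=46, memory_bus_bits=128, l2_cache_mb=32)

for sku in (flagship, upper_mid, mainstream):
    print(f"{sku.name}: {sku.sm_count} SMs, {sku.memory_bus_bits}-bit bus, {sku.l2_cache_mb} MB L2")
```

Every product is literally a subset of the flagship, which is why there's no clean seam to split the company along.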
There is no monopoly. If Nvidia doesn't play it right in the coming years they won't hold on to their current position. Nvidia aren't getting into custom chips just for fun. If the major cloud providers end up using their own custom silicon, that's a major blow for Nvidia.
The point the article makes is that NVIDIA is pressuring current customers by threatening shipping delays, which is an abuse of their power.
And I hope they get punished for it, but that is not the same as Nvidia having a monopoly.
They have as much of a monopoly as Google has on search. Sure, there are competitors, and there is a chance that new tech might disrupt them, but they are able to abuse their market position (for example, forcing websites to use Google Analytics or be penalised in search results).
I disagree. Most of the big actors in the cloud/AI space have their own silicon in the works, which is a big enough concern for Nvidia that they are looking into providing custom solutions. If the CUDA moat breaks, Nvidia will be in a much weaker position.
The search engine landscape is completely different, although to be fair, I don't think you meant that those markets are directly comparable.
I'd argue that Google is suffering kind of the same issues when it comes to LLMs. They freaked out when ChatGPT dropped, and I'm pretty sure Bard was rushed out just to compete with Bing Chat.
They already do, but AFAIK not at scale. What I believe (though that's my intuition; I don't have numbers to back it up) is that Alphabet for GCP, Microsoft for Azure, Amazon for AWS, and others do design their own chips, their own racks, etc., but mostly as promotional R&D. They want to show investors that they are acutely aware of their dependency on NVIDIA and are trying to be more resilient by having alternatives.

What is still happening, though, is that in terms of compute per watt, and thus per dollar, NVIDIA is the de facto standard through its entire stack, both hardware (H100, A100, 40xx, etc.) and software (mostly CUDA here), but also through the trust of CTOs. Consequently, my bet is that GCP, Azure, and AWS do have their custom silicon running today, but it's less than 1% of their compute, and they probably even provide it at a discounted price to customers.

It's a bit like China and the billions it has poured into making its own chips. Sure, they are showing that they can (minus the dependency on ASML...), but at what cost? Making a chipset with performance equivalent to the state of the art is a research feat not to be downplayed, but doing it at scale in a commercially competitive way is quite different.
Anyway that's just my hunch so if anybody has data to contradict that please do share.
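To make the "compute per watt and thus per dollar" yardstick concrete, here's a toy comparison in Python. Every figure below is an invented placeholder, not a real benchmark of any vendor's parts:

```python
# Toy compute-per-watt / compute-per-dollar comparison.
# All numbers are made up for illustration only.
accelerators = {
    "incumbent_gpu":  {"tflops": 1000.0, "watts": 700.0, "price_usd": 30_000.0},
    "custom_silicon": {"tflops":  400.0, "watts": 350.0, "price_usd": 12_000.0},
}

for name, a in accelerators.items():
    perf_per_watt = a["tflops"] / a["watts"]          # TFLOPS per watt
    perf_per_dollar = a["tflops"] / a["price_usd"]    # TFLOPS per dollar
    print(f"{name}: {perf_per_watt:.2f} TFLOPS/W, "
          f"{perf_per_dollar * 1000:.2f} TFLOPS per $1k")
```

The point is that raw TFLOPS alone doesn't decide the purchase; the ratios do, and that's where the incumbent's whole stack (silicon, software, trust) currently wins.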
Limit them to producing PCIe cards only as low-volume reference models, and require their software to be open source to break that aspect of the lock-in; those are the two big things. As an alternative to the latter, require them to publish actual platform docs. Right now they're not only providing the only compiler for their cards, which is deliberately incompatible with everything else; they're also making sure that no one else can get performance out of Nvidia cards without excessive reverse-engineering, and some things are even locked down hard via firmware signing. Splitting AI off from GPU would be a bonus.
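To see the software side of that lock-in in miniature, consider how much ML application code privileges the CUDA path. A minimal Python sketch; the PyTorch device-query calls are real APIs, but the fallback chain itself is just an illustration:

```python
import torch

def pick_device() -> torch.device:
    """Illustrates how application code typically hard-wires the CUDA path.

    A competing accelerator only gets used if the framework has grown a
    dedicated backend for it; otherwise everything falls through to CPU.
    """
    if torch.cuda.is_available():          # NVIDIA (or ROCm masquerading as CUDA)
        return torch.device("cuda")
    if torch.backends.mps.is_available():  # Apple Silicon backend
        return torch.device("mps")
    return torch.device("cpu")             # everyone else falls off a cliff

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x  # same model code everywhere, but performance depends on the backend
```

Open docs or an open compiler wouldn't rewrite code like this overnight, but they would at least let other backends reach competitive performance without reverse-engineering.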
I'm all for open source, but that would basically be like confiscating and giving away that part of the company.
Something we might expect from China, but not from a democratic society.
Is their product the GPU, or is their product the software?
They are basically forcing their customers to do less with the hardware by obfuscating its functionality.
AFAIK you don't pay extra to use CUDA or drivers, so while software is part of the ecosystem, there is no doubt the product is the hardware. When in doubt, follow the money.
It's both. Jensen himself has said they aren't a GPU company anymore, highlighting their software stack. CUDA was not built in a day.
~~CUDA was not mostly 'built' by them. It was originally built on top of technology acquired from a company called Ageia. Ageia built an ASIC and a physics engine, called "PhysX", that could run instructions for the ASIC, and that team ported their toolchain to run on GPUs and other ASICs.~~
CUDA was initially released in 2007, and Ageia was acquired in 2008. It would be extremely dishonest not to say that CUDA is what it is today because of Nvidia.
I get that hating on big corpos is cool on this platform, but there's no need to warp reality just to talk smack about them.
So essentially destroying one of the US's most important companies going into the future. Their chips are so highly valued that the US government is creating sanctions specifically to stop the sale of their high-end chips to hostile nations. I can't imagine the US shooting themselves in the foot like that.
If you think that would destroy nvidia, you're selling them quite short. Other companies in the market are following that exact business model: don't produce your own boards, actually document the hardware, and have FLOSS drivers.
If you're an nvidia fanboy, making nvidia compete on a level playing field by making them play fair of course sounds like a disaster, but you come here and throw national interest into the mix. How the fuck would nvidia losing market share to AMD damage US national interest? It would strengthen the US's standing by giving it independent options.
It might even enable Intel to get their foot in the door of the market; remember, they're the only one among the RGB trio (AMD, Nvidia, Intel) to actually produce their own chips. In the US.
Nvidia not producing their own boards wouldn't solve anything; it would just complicate matters for Nvidia. Ask Asus or EVGA what their margins are on their Nvidia GPUs. Nvidia opening their stack to the competition was the only half-realistic suggestion.
Why do you think the whole 4090 D debacle happened? The US government has obvious interests in limiting the compute power China has access to. Nobody cares about their gaming GPUs; it's the ML chips that are making waves, and those are of obvious national interest to the US government.
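For context on that export angle: the US rules gate accelerators on a "total processing performance" figure, roughly operations per second multiplied by operand bit width, which is why a slightly cut-down 4090 D could ship to China while the full 4090 couldn't. A back-of-the-envelope sketch in Python, with illustrative numbers and a widely reported threshold; consult the actual BIS rule text for the real definition:

```python
def total_processing_performance(tera_ops_per_s: float, bit_length: int) -> float:
    """Rough TPP metric as described in public reporting on the US export
    rules: processing rate times operand bit length. Illustrative only."""
    return tera_ops_per_s * bit_length

THRESHOLD = 4800  # widely reported TPP cutoff; verify against the rule itself

# Invented example: a flagship gaming die vs. a cut-down export variant.
full_die = total_processing_performance(tera_ops_per_s=660.0, bit_length=8)
export_variant = total_processing_performance(tera_ops_per_s=580.0, bit_length=8)

for name, tpp in [("full die", full_die), ("export variant", export_variant)]:
    verdict = "restricted" if tpp >= THRESHOLD else "exportable"
    print(f"{name}: TPP = {tpp:.0f} -> {verdict}")
```

Trimming the clock or the enabled units just enough to duck under the threshold is exactly the kind of move the "D" variant represents.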
Brush that chip off your shoulder; I'm not sure what's making you so angry. And why are you bringing AMD into the picture? They aren't even the biggest threat to Nvidia's ML hegemony. I was also specifically referring to how dismantling Nvidia would be counterproductive to US interests, not to Nvidia's market share.
Neither AMD nor Nvidia is in the foundry business, so I don't see how that's relevant. Intel is decoupling its foundry, so nothing is stopping either company from porting their chips there if need be.
Nvidia could only fuck them over like that because they were able to produce their own boards: If they have to rely on board manufacturers to sell their chips, they have to be nice enough for board manufacturers to actually bother doing that.
That's not the point of contention; this is:
...and market share going to other US companies would hurt that interest in what way exactly?
AMD shmahemde. There are a gazillion US startups in the space that could make it, or not, and/or be bought up by AMD or Intel; both certainly have their eyes and products on the market. The US's national interest is hurt by nvidia's unfair business practices limiting what innovation gets brought to market.
Margins wouldn't change. GPUs are brand sellers; OEMs would try to make their margins on other products. E.g., if Asus were to stop producing graphics cards for Nvidia, their mindshare would plummet.
You are the one bringing market share into the discussion. I haven't said anything about Nvidia losing market share hurting US interests.
You're acting as if they're the only actor in the market, but there is competition from multiple sides. You don't dismantle a company purely for having a dominant position.
Yes, yes you do, because that's a market failure. In the free market there are no monopolies; thing is, the real world lacks the perfect information and perfect rationality of actors needed for the market to actually be free, so we have to use regulation to approximate it.
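For the textbook version of that market-failure point (standard microeconomics, stated here from memory rather than from the thread): a profit-maximizing monopolist prices above marginal cost by the Lerner markup, while perfect competition drives price down to marginal cost:

$$\frac{P - MC}{P} = \frac{1}{|\varepsilon_d|} \quad \text{(monopoly)}, \qquad P = MC \quad \text{(perfect competition)}$$

where $\varepsilon_d$ is the price elasticity of demand. The less substitutable the product (a CUDA-shaped moat makes demand inelastic), the bigger the markup, which is exactly the gap regulation tries to close.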