this post was submitted on 20 Dec 2023
71 points (86.6% liked)
Technology
If anything it should've been AMD. Intel is barely keeping up with the CPU competition these days.
Not really. ATI was always "G is for graphics" and built video game cards. They never really saw the potential for GPGPU (nor did they have the resources anyway), which is why NVIDIA had a huge first-mover advantage (CUDA is 16 years old, launched within a year of AMD's acquisition of ATI). By the time AMD bought them it was already very late.
Then AMD wanted to build cards for people to buy while NVIDIA was more than happy selling overpriced cards to crypto miners.
OpenCL was an ambitious project that was too big and too open for what the Khronos group could deliver. Vulkan came too late.
Intel could have done it, but IIRC the CEO at the time (can't remember the name) didn't want to diversify their products after Itanium's failure. They just doubled down on CPUs.
Maybe ATI could have, but the brand was retired in 2010.
AMD launched ROCm in 2016, after the first AI boom of 2012 but before GANs and transformers exploded. In recent years they've been better positioned than Intel ever was.
Brother, the ATI Radeon HD 5000 series was basically a VLIW-architecture single-board computer in a GPU package, and that TeraScale lineage goes back to designs from before AMD bought ATI.
Disagree. GCN cards were incredibly compute focused.
Shit, AMD even co-developed HBM with SK Hynix because they saw the value of ridiculously high-bandwidth, dense, energy-efficient memory in data-centre applications. HBM is still used today in the enterprise market.
AMD's problem was that they had no money at the time and couldn't build out their software ecosystem like Nvidia could. They had to bank on just getting the ball rolling and open-sourcing their efforts in the hope that others would contribute, which didn't happen to the extent they'd have liked. Meanwhile Nvidia, with its mountains of cash, could just pump out CUDA and flood universities with free GPUs to get researchers hooked on the Nvidia software stack.
AMD dropped the ball when it came to software, and has now split its GPU architectures (RDNA for gaming, CDNA for compute) so that only its enterprise cards target data science. NVIDIA got in early and made CUDA the default across its entire product lineup, so consumer cards could serve hobbyists as entry-level compute cards. While it would've been nice to see more competition, the only company taking this space seriously has been Nvidia.