this post was submitted on 18 Dec 2024
14 points (93.8% liked)
Hardware
759 readers
248 users here now
All things related to technology hardware, with a focus on computing hardware.
Rules:
- Follow the Lemmy.world Rules - https://mastodon.world/about
- Be kind. No bullying, harassment, racism, sexism etc. against other users.
- No spam, illegal content, or NSFW content.
- Please stay on topic; adjacent topics (e.g. software) are fine if they are strongly relevant to technology hardware. Another example would be business news for hardware-focused companies.
- Please try to post original sources when possible (as opposed to summaries).
- If posting an archived version of the article, please include a URL link to the original article in the body of the post.
Some other hardware communities across Lemmy:
- Augmented Reality - [email protected]
- Gaming Laptops - [email protected]
- Laptops - [email protected]
- Linux Hardware - [email protected]
- Mechanical Keyboards - [email protected]
- Microcontrollers - [email protected]
- Monitors - [email protected]
- Raspberry Pi - [email protected]
- Retro Computing - [email protected]
- Single Board Computers - [email protected]
- Virtual Reality - [email protected]
Icon by "icon lauk" under CC BY 3.0
founded 2 years ago
you are viewing a single comment's thread
The CUDA moat is pretty deep, but the primitives are starting to solidify and almost no one uses CUDA directly. Increasingly, popular libraries are going multi-backend (thanks, Apple silicon).
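For a concrete sense of what "multi-backend" looks like in practice, here's a minimal sketch assuming PyTorch as the library (it ships CUDA, Apple MPS, and CPU backends): the user code never touches CUDA directly, it just asks for whichever device is available.

```python
import torch

def pick_device() -> torch.device:
    """Pick the best available backend without writing any CUDA code."""
    if torch.cuda.is_available():           # NVIDIA GPU via the CUDA backend
        return torch.device("cuda")
    if torch.backends.mps.is_available():   # Apple silicon via the Metal (MPS) backend
        return torch.device("mps")
    return torch.device("cpu")               # portable fallback

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x.T  # same high-level call everywhere; the library dispatches to the backend
print(device, tuple(y.shape))
```

Swapping the hardware only changes which branch fires; the rest of the code is untouched, which is why a new accelerator mostly needs library-level backend support rather than user-facing CUDA compatibility.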
My guess is that as soon as cheap accelerators with LARGE memory banks hit the market, the libraries will support whatever API those need, and CUDA's dominance will essentially be shattered forever.
But we are not there yet because making good numerical hardware is fucking hard.