Alphane_Moon 2 points 3 days ago

I can't speak to the nitty-gritty details or enterprise-scale technology, but from a consumer perspective (local ML upscaling and LLMs, using both proprietary and free tools), Nvidia clearly has the upper hand in software support.

anamethatisnt 2 points 3 days ago

How cheap would rivalling high-VRAM offerings have to be to upset the balance and move devs toward Intel/AMD?
Do you think their current platform offerings are mature enough to grab market share with "more for less" hardware, or is the software support advantage just too large?

Alphane_Moon 2 points 3 days ago

From my limited consumer-level perspective, Intel/AMD platforms aren't mature enough. Look into almost any open-source or commercial ML software aimed at consumers: Nvidia support is guaranteed and first-class.

The situation is arguably different in gaming.

anamethatisnt 2 points 3 days ago

Thanks for the insight. Kinda sad that self-hosted LLM or ML use means Nvidia is a must-have for the best experience.

brucethemoose 2 points 3 days ago

Only because AMD/Intel aren't pricing competitively. I define "best experience" as the largest LLM/context I can fit on my GPU, and right now that's essentially dictated by VRAM.
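
As a rough back-of-envelope for that claim, here's a minimal Python sketch of where the VRAM goes: weights scale with parameter count and quantization width, while the KV cache grows linearly with context length. All dimensions below are illustrative 7B-class values (grouped-query attention assumed), not taken from any specific model card.

```python
# Rough VRAM estimate for a quantized LLM plus its KV cache.
# All model dimensions are illustrative assumptions, not from a real spec.

def weight_bytes(n_params: float, bits_per_weight: float) -> float:
    """Bytes needed for model weights at a given quantization width."""
    return n_params * bits_per_weight / 8

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   context_len: int, bytes_per_elem: int = 2) -> float:
    """Bytes for the K and V caches across all layers (fp16 by default)."""
    return 2 * n_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

GIB = 1024 ** 3

# Assumed 7B-class model with grouped-query attention.
weights = weight_bytes(7e9, bits_per_weight=4.5)  # ~4-bit quant with overhead
kv = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, context_len=8192)

print(f"weights:  {weights / GIB:.1f} GiB")              # ~3.7 GiB
print(f"KV cache: {kv / GIB:.1f} GiB")                   # ~1.0 GiB
print(f"total:    {(weights + kv) / GIB:.1f} GiB plus runtime overhead")
```

At these assumed numbers an 8 GB card fits the model with headroom, while a 70B-class model at the same quantization would need around 37 GiB for weights alone, which is why VRAM ends up dictating the "largest LLM/context" you can run.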

That being said, I get why most people wouldn't want to go through the fuss of setting up Intel/AMD inference.
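
To illustrate that fuss, even at the framework level the three vendors surface through different runtimes. A minimal probe, assuming a recent PyTorch (2.4 or later, where the torch.xpu namespace exists); note that ROCm builds of PyTorch report through the torch.cuda API:

```python
# Probe which GPU runtime this PyTorch install can actually see.
# Assumes PyTorch >= 2.4 for the torch.xpu (Intel) namespace.
import torch

if torch.cuda.is_available():
    # True for both CUDA and ROCm builds; torch.version.hip is a
    # version string on AMD/ROCm builds and None on Nvidia/CUDA ones.
    backend = "ROCm (AMD)" if torch.version.hip else "CUDA (Nvidia)"
    print(f"backend: {backend}, device: {torch.cuda.get_device_name(0)}")
elif hasattr(torch, "xpu") and torch.xpu.is_available():
    print(f"backend: XPU (Intel), device: {torch.xpu.get_device_name(0)}")
else:
    print("No GPU backend visible; falling back to CPU.")
```

Getting one of the non-CUDA branches to trigger usually means hunting down a vendor-specific PyTorch build in the first place, which is much of the fuss in practice.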
