this post was submitted on 12 Feb 2024
164 points (98.8% liked)

Linux

[–] cbarrick 38 points 9 months ago (1 children)

After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today.

From https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq

[–] woelkchen 7 points 9 months ago (2 children)

I'm really curious who at AMD thought it was a great idea to fund a CUDA compatibility layer but never release it. As stated, it was only released because AMD ended financial support.

[–] [email protected] 1 points 9 months ago* (last edited 9 months ago) (1 children)

The problem is that if we make CUDA the standard, that puts Nvidia in control of the standard. Nvidia could try to manipulate the situation in future versions of CUDA by reworking it to fuck with this implementation, giving AMD a shaky name in the space.

We saw this happen with Wine: although probably not deliberately, Microsoft made Windows compatibility a moving and very unstable target.

That's tolerable for open-source communities, but it won't fly for official vendor support.

[–] woelkchen 1 points 9 months ago (1 children)

The problem is that if we make CUDA the standard, that puts Nvidia in control of the standard. Nvidia could try to manipulate the situation in future versions of CUDA by reworking it to fuck with this implementation, giving AMD a shaky name in the space.

I get that, but why would they fund development of ZLUDA for two years?

[–] [email protected] 2 points 9 months ago

Reverse engineering CUDA can bring other benefits. It allows AMD to see what Nvidia is doing right and potentially implement it in their own tech. Having not just documentation but a working implementation can work wonders in this regard.

Or maybe they did want to use it but were scared of getting SLAPPed by Nvidia, so they let the dev open-source it instead.

[–] [email protected] 1 points 2 months ago

Probably a way to save face and not have AMD directly do it.

[–] [email protected] 24 points 9 months ago* (last edited 9 months ago)

Basically, it means AMD is now a possible contender for the rather large market of scientific researchers and private industry running CUDA-based software for 'AI' development or research on huge banks of GPUs.

This initial implementation probably still has some kinks to iron out, but it could eventually mean Nvidia no longer has a functional monopoly in that market.

It's also neat from a hobbyist perspective if you're looking to do some smaller-scale CUDA-based work along the same lines.

[–] JustUseMint 14 points 9 months ago (1 children)

Another common AMD W. So glad I got away from Nvidia. This will help my local work with LLMs nicely.

[–] [email protected] 13 points 9 months ago (1 children)

I'd say it's more like they're failing upwards. It's certainly good for AMD, but it seems like it happened in spite of their involvement, not because of it:

For reasons unknown to me, AMD decided this year to discontinue funding the effort and not release it as any software product. But the good news was that there was a clause in case of this eventuality: Janik could open-source the work if/when the contract ended.

AMD didn't want this advertised or released, and even canned this project despite it reaching better performance than the OpenCL alternative. I really don't get their thought process. It's surreal. Do they not want to support AI? Do they not like selling GPUs?

[–] woelkchen 7 points 9 months ago* (last edited 9 months ago)

I really don’t get their thought process. It’s surreal.

Maybe they see it as something that would undermine their efforts to increase ROCm/HIP adoption? (But why fund its development for two years then? I agree with you: it all seems so weird!)

[–] [email protected] 12 points 9 months ago* (last edited 9 months ago) (3 children)

Can someone please explain like I’m five what the meaning and impact of this will be? Past posts and comments don’t seem to be very clear. As someone who uses both Linux and macOS professionally for design, this could be a massive game changer for me.

[–] aodhsishaj 22 points 9 months ago (3 children)

If you already have a CUDA workflow and want to use an AMD card, you can do that with this library.

[–] [email protected] 10 points 9 months ago

That includes stuff like Stable Diffusion, which recommends Nvidia cards because it uses CUDA to accelerate image generation?

[–] [email protected] 2 points 9 months ago (1 children)

So does it work with off-the-shelf software, or is it something the developer has to patch in?

[–] woelkchen 2 points 9 months ago

So does it work with off-the-shelf software, or is it something the developer has to patch in?

The point of a drop-in replacement is that no patching is required; in reality, though, the software was released in an incomplete state.
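
To make "drop-in" concrete, here's a rough sketch of my own (not taken from the ZLUDA docs) of a perfectly ordinary CUDA runtime program. Nothing in it knows about ZLUDA; a drop-in layer just has to supply CUDA libraries that answer these same calls, so the unmodified binary sees whatever GPU the replacement reports — presumably the Radeon card, though I haven't verified that myself.

```cuda
// query_device.cu -- plain CUDA runtime code, nothing ZLUDA-specific.
// A drop-in layer has to satisfy exactly these calls; the program itself
// is never patched or recompiled.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaError_t err = cudaGetDeviceCount(&count);
    if (err != cudaSuccess || count == 0) {
        std::printf("No CUDA-capable device visible (%s)\n", cudaGetErrorString(err));
        return 1;
    }
    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        // Under a working drop-in layer this would presumably print the
        // AMD card's name rather than a GeForce model (assumption on my part).
        std::printf("Device %d: %s, %zu MiB\n", i, prop.name, prop.totalGlobalMem >> 20);
    }
    return 0;
}
```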

[–] [email protected] -2 points 9 months ago* (last edited 9 months ago) (1 children)

ok, I get that much. what I'd like to know, if you're willing to explain: what's it going to be like deploying that on, say, a Mac workstation? a Pop!_OS workstation? (edit: e.g., how can I, on macOS, get it working with After Effects, etc.?)

thanks for your time

[–] laughterlaughter 2 points 9 months ago

Your question is legitimate, but chances are that you will need to find the answers yourself by reading the docs.

[–] [email protected] 11 points 9 months ago (1 children)

CUDA is what lets a program use an NVIDIA GPU, in addition to the CPU, for heavy calculations. AMD has now made it possible to use their cards for it too.
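
If it helps, here's roughly what "using the GPU for the heavy calculations" looks like in code. This is a generic, minimal CUDA sketch of my own (not taken from ZLUDA or the article): the CPU sets up the data, and a tiny kernel does the per-element math across the GPU's many threads.

```cuda
// vector_add.cu -- the classic minimal CUDA example: the CPU (host) prepares
// the data, the GPU (device) runs the per-element arithmetic in parallel.
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // one GPU thread per element
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Buffers on the CPU side.
    float* ha = (float*)std::malloc(bytes);
    float* hb = (float*)std::malloc(bytes);
    float* hc = (float*)std::malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Buffers in GPU memory, plus copies of the inputs.
    float *da, *db, *dc;
    cudaMalloc((void**)&da, bytes);
    cudaMalloc((void**)&db, bytes);
    cudaMalloc((void**)&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vector_add<<<blocks, threads>>>(da, db, dc, n);

    // Copy the result back and check one value.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    std::printf("c[0] = %.1f (expected 3.0)\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    std::free(ha); std::free(hb); std::free(hc);
    return 0;
}
```

Normally you'd build something like this with Nvidia's toolchain; the whole point of a compatibility layer is that, in principle, a binary like this could then run against an AMD card without changes.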

[–] [email protected] 3 points 9 months ago* (last edited 9 months ago) (1 children)

I know what CUDA does (as someone who likes rendering stuff, but with AMD cards, I've missed out on it). I'm trying to figure out, realistically, how I can easily deploy and make use of it on my Linux and Mac workstations.

The details I've come across lately have been a bit… vague.

edit: back when I was in design school, I heard, “when Adobe loves a video card very much, it will hardware accelerate. We call this ‘CUDA’.”

[–] [email protected] 1 points 9 months ago

You can't use it with programs that aren't specifically coded to use it. Outside of hash cracking, AI training and crypto mining, few programs are.

If you mean from a developer perspective, you need to download the CUDA libraries and read through the documentation.