this post was submitted on 03 Aug 2023
41 points (97.7% liked)

Selfhosted


Getting GPU acceleration working is a common task for those of us running Plex or Jellyfin. There is not much documentation on getting the NVIDIA container stack to work with Podman, even less for Gentoo, and NVIDIA's container toolkit has seen a lot of changes lately.

I have been fighting with Podman for a while now and only recently got it working 1:1 with my Docker setup. Gentoo may not be the most popular or easiest-to-use distro, but I documented the process in case some poor soul runs across it while searching the web.
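For readers who just want the shape of the workflow: with recent versions of the NVIDIA Container Toolkit, Podman consumes the GPU through a CDI (Container Device Interface) spec rather than a runtime hook. A minimal sketch (paths and the image tag are illustrative, not from the linked write-up):

```shell
# Generate a CDI spec describing the host's NVIDIA devices.
# /etc/cdi is one of the default locations Podman searches.
sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml

# List the device names the spec exposes,
# e.g. nvidia.com/gpu=0 and nvidia.com/gpu=all
nvidia-ctk cdi list

# Run a container with the GPU passed through via CDI;
# the CUDA base image tag here is just an example.
podman run --rm --device nvidia.com/gpu=all \
    docker.io/nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi
```

If `nvidia-smi` inside the container prints the same card as on the host, the passthrough is working. The Gentoo-specific parts (toolkit ebuilds, driver setup) are what the linked write-up covers.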

Feel free to poke holes in it or leave feedback.

you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 2 points 1 year ago* (last edited 1 year ago) (1 children)

Absolutely stellar write up. Thank you!

I have a couple of questions.
Imagine I have a powerful consumer gpu card to trow at this solution, 4090ti for the sake of example.
- How many containers can share one physical card, taking into account total vram memory will not be exceeded?
- How does one virtual gpu look like in the container? Can I run standard stuff like PyTorch, Tensorflow, and CUDA stuff in general?

[–] [email protected] 1 points 1 year ago (1 children)

Thanks!

As I understand it, it bind-mounts the /dev/nvidia devices and the CUDA toolkit binaries inside the container, giving it direct access just as if it were running on the host. It's not virtualized, just running under a different namespace, so VRAM is still managed by the host driver. I would think the same restrictions that apply to running CUDA applications normally on the host also apply inside containers. Personally I've had up to 4 containers running GPU processes at the same time on 1 card.
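To make the sharing concrete, here is a hedged sketch (container names, image tag, and the CDI device name are illustrative assumptions): several containers are given the same physical card, and each one sees the full device while the host driver arbitrates VRAM, exactly as it would for multiple GPU processes on the host.

```shell
# Start four containers that all share physical GPU 0 via CDI.
# Names and the CUDA image tag are examples, not fixed values.
for i in 1 2 3 4; do
  podman run -d --name "gpu-job-$i" --device nvidia.com/gpu=0 \
      docker.io/nvidia/cuda:12.2.0-base-ubuntu22.04 \
      sleep infinity
done

# Each container reports the same card; memory.used reflects
# the combined allocations of every process on the device.
podman exec gpu-job-1 \
    nvidia-smi --query-gpu=name,memory.used --format=csv
```

There is no per-container VRAM quota in this setup; if the containers collectively over-allocate, CUDA calls fail with out-of-memory errors just as they would for competing processes on the host.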

And yes, NVIDIA hosts its own GPU-accelerated container images for PyTorch, TensorFlow, and a bunch of others on NGC. They also have images with the full CUDA SDK on their Docker Hub.
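As a quick sanity check (the NGC image tag below is an example; pick a current one from the catalog), you can run an NGC PyTorch image under Podman and ask PyTorch whether it sees the card:

```shell
# Pull an NGC PyTorch image and confirm CUDA is visible inside
# the container; the tag is illustrative.
podman run --rm --device nvidia.com/gpu=all \
    nvcr.io/nvidia/pytorch:24.01-py3 \
    python -c "import torch; print(torch.cuda.is_available())"
```

`True` here means the bind-mounted driver, the CDI device, and the container's CUDA userspace all line up.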

[–] [email protected] 2 points 1 year ago

That's wonderful to know! Thank you again.
I'll follow your instructions; this implementation is exactly what I was looking for.