this post was submitted on 30 Dec 2023

homelab


I have a P400 in my storage server, which currently also runs some media containers like Plex, sonarr-sma, radarr-sma, Jellyfin, Immich (exploring), etc. I have the GPU surfaced via docker and added it to each container that needs it for hardware acceleration. Is it possible to leverage the Nvidia GPU remotely (over the LAN) without the containers accessing it (pseudo-)directly? I want to move the media-handling containers to a Turing Pi 2 and keep the GPU on the storage server.
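For context, the "surfaced via docker" part usually means the NVIDIA Container Toolkit. A minimal sketch of what that looks like per container (the container name, image, and mount path here are illustrative, not the OP's actual setup):

```shell
# Hypothetical sketch: surfacing an NVIDIA GPU to one container with the
# NVIDIA Container Toolkit installed on the host. Names/paths are placeholders.
docker run -d --name jellyfin \
  --gpus all \
  -e NVIDIA_DRIVER_CAPABILITIES=compute,video,utility \
  -v /srv/media:/media \
  jellyfin/jellyfin
```

The catch, and the point of the question, is that `--gpus` only works on the host that physically has the card; docker has no built-in way to hand a remote host's GPU to a container.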

top 8 comments
[–] [email protected] 4 points 11 months ago (1 children)

that’s a negative ghost rider

[–] thisisawayoflife 2 points 11 months ago

Serious bumdiddlyummer

[–] [email protected] 2 points 11 months ago (1 children)

Not sure about docker support, but there is a gpu-over-ip implementation that supports Linux here: https://github.com/Juice-Labs/Juice-Labs

[–] thisisawayoflife 1 points 11 months ago* (last edited 11 months ago)

This is interesting, thanks!

Edit: Sadly, they don't support media encoding at all. So it might still be useful for ML duties, but not for transcoding.

[–] ARNiM 1 points 11 months ago* (last edited 11 months ago) (1 children)

Not possible AFAIK, plus it would degrade performance due to the latency, etc. IMO it's not feasible and not the best way to leverage your GPU's horsepower.

You will need to keep the transcoding on the storage server; maybe the rest (a viewer, manager, etc.) can move to the Turing Pi 2.

But then, if it's for real-time decoding, it's not possible. Rather than getting an SBC like the Pi-style computers, consider something like a Biostar motherboard with a built-in J4125, which has a PCI-E slot; move your GPU to that board to handle all your media needs, and keep the storage server GPU-less.

[–] thisisawayoflife 2 points 11 months ago

Transcoding on download seems like the easiest use case, I could use Tdarr and have one node with the GPU. But for apps like Immich that use the GPU for both transcoding (raw to jpg?) and for ML purposes (facial recognition) I'm guessing the container will have to run on the hardware where the GPU is, which means Plex and Jellyfin will also have to follow the gpu.

I've definitely thought about moving the GPU to a dedicated mini itx box. Wonder if I could find something rack friendly..

[–] GoddessOfGouda 0 points 11 months ago (1 children)

Here is a Super User post about PCIe virtualization, and it involves writing custom drivers.

Off the top of my head, a similar transcoding setup comes to mind. In that case I used a shared volume mounted on both the media server and the transcoding server, and SSH to run ffmpeg on the remote server.

I think an easier setup would be to proxy app calls that use the gpu through ssh to your gpu container, then write the output to a volume that the non gpu host can read from.
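A rough sketch of that SSH-proxy idea, assuming both hosts mount the same share (e.g. NFS) at `/mnt/media` — the hostname, paths, and codec choice here are placeholders, not a tested config:

```shell
# Hypothetical sketch: the GPU-less host asks the GPU host to do an NVENC
# transcode; input and output live on a share both hosts can see.
ssh gpu-host ffmpeg -hwaccel cuda -i /mnt/media/incoming/in.mkv \
  -c:v h264_nvenc -c:a copy /mnt/media/library/out.mkv
```

Since the output lands on the shared volume, the non-GPU host can serve it directly; the trade-off is that per-file batch jobs work well this way, while on-the-fly streaming transcodes need something purpose-built (like the rffmpeg suggestion below).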

[–] GoddessOfGouda 4 points 11 months ago

If you’re looking for transcoding, check out rffmpeg