Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around self-hosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues on the community? Report them using the report flag.
Questions? DM the mods!
I'm curious what you're doing with Frigate, and how you're doing it without a graphics card?
I've been using it for object detection, but I had to install it on my workhorse because my server doesn't have a graphics card. I suppose it doesn't need one if you're not doing ML processing, but I'm still curious.
I'm sure it depends on your workload, but I've been running object detection just fine off the iGPU on my i5-8600. I think the key is to ensure the frame size isn't unnecessarily large for object detection.
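For anyone wanting to try this, the frame-size point maps to Frigate's per-camera `detect` settings: feed the low-resolution substream to the `detect` role instead of the full-resolution stream. A sketch of what that looks like in the config; the camera name and RTSP URL are placeholders, not from this thread:

```yaml
cameras:
  front_door:                                # placeholder camera name
    ffmpeg:
      inputs:
        - path: rtsp://192.168.1.10:554/sub  # placeholder: camera's low-res substream
          roles:
            - detect                         # run detection on the small stream only
    detect:
      width: 1280                            # keep this modest; detection doesn't need full res
      height: 720
      fps: 5
```

Recording can still use the full-resolution main stream under a separate input with the `record` role.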
Cool, I might try it.
I wanted to get a dedicated card for video transcoding anyway, but it's good to know I don't necessarily need it.
It's well worth it to get a $50 coral tpu for object detection. Fast inference speed and nearly zero CPU usage.
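If you do go the Coral route, enabling it in Frigate is a small config change. A sketch based on Frigate's detector config format, assuming the USB version of the Coral:

```yaml
detectors:
  coral:
    type: edgetpu
    device: usb   # the M.2/PCIe variants use a pci device identifier instead
```

The container also needs the USB device passed through (e.g. mapping `/dev/bus/usb` in Docker).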
I have a similar setup and it's been a huge pain when I have to do OS updates.
The Coral needs a DKMS module, but the sources and Google's own documentation for it are out of date. I would highly recommend using the iGPU for inference.
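For context, the iGPU route on Intel chips goes through Frigate's OpenVINO detector, which needs no out-of-tree kernel module. A minimal sketch, assuming a recent Frigate release where the bundled default model is used:

```yaml
detectors:
  ov:
    type: openvino
    device: GPU   # run inference on the Intel iGPU via OpenVINO
```

The container needs the render device passed through (e.g. `/dev/dri/renderD128` in Docker) for the iGPU to be visible.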
How efficient is using a GPU? I understood the efficiency wasn't nearly as good, but that may have been info from a while back.
I am currently migrating away from my 6th gen i5 to a newer N100.
Speed-wise, it was about the same as the Coral: about 6-8 ms on the i5.
I thought about it, but I have a couple of other services that could benefit from a dedicated GPU anyway. Might as well just save for a proper PCIe card.
Frigate is currently running on a Raspberry Pi 4 with a USB Coral TPU, and it runs great. I only have 2 cameras currently, though...
If the Raspberry Pi can handle it, I've no doubt this server will be more than capable.