7Sea_Sailor

joined 1 year ago
[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

Both UnraidFS and mergerFS can merge drives of different types and sizes into one pool. They also allow adding or removing drives without disturbing the rest of the pool. Neither is possible with traditional RAID (or at least not without a significant time sink for rebuilding the array), no matter which RAID level you use.
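As a sketch of how simple the mergerFS side is (paths and mount options here are illustrative, not prescriptive):

```shell
# pool two mismatched drives into one mount point (example paths)
sudo mergerfs /mnt/disk1:/mnt/disk2 /mnt/pool \
  -o cache.files=off,category.create=mfs,moveonenospc=true

# adding a third drive later is just a remount with the new path included;
# existing data on the other drives is untouched
sudo mergerfs /mnt/disk1:/mnt/disk2:/mnt/disk3 /mnt/pool \
  -o cache.files=off,category.create=mfs,moveonenospc=true
```

`category.create=mfs` places new files on the drive with the most free space; the exact policy is a matter of taste.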

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago)

Can confirm that there are zero ingress or egress fees, since this is not an S3 object-storage server but a simple FTP server that also has a Borg/Restic module. So it simply doesn't fall into the ingress/egress cost model.

[–] [email protected] 2 points 3 months ago (1 children)

Correct me if I'm wrong, but if you play a 1080p video on a 4K screen, it gets upscaled by the display chain. If you instead encode that 1080p video into a 4K stream and then play the 4K stream on the 4K screen, no post-processing is applied to the video on the screen. All the upscaling happens during encoding, where you have far more control over upscaler quality.

[–] [email protected] 10 points 3 months ago

Because using a containerization system to run multiple services on the same machine is vastly superior to running everything on bare metal, both from a security and an ease-of-use standpoint. Why wouldn't you use Docker?

[–] [email protected] 1 points 3 months ago

Caddy and Authentik play very nicely together thanks to Caddy's forward_auth directive. Regarding ACLs, you'll have to read some documentation, but it shouldn't be difficult to figure out. The documentation and forum are great sources of info.
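A minimal sketch of what that pairing looks like in a Caddyfile, assuming Authentik's embedded outpost is reachable at `authentik:9000` (hostnames and ports are placeholders):

```
app.yourdomain.com {
	# ask Authentik for an auth decision before proxying each request
	forward_auth authentik:9000 {
		uri /outpost.goauthentik.io/auth/caddy
		copy_headers X-Authentik-Username X-Authentik-Email
	}
	reverse_proxy app:8080
}
```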

[–] [email protected] 0 points 3 months ago

MP4 is a video container. Do you mean MP3?

[–] [email protected] 41 points 3 months ago (16 children)

Just use yt-dlp instead of relying on websites that shove ads in your face and may do whatever they want to the files you're downloading.
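For anyone unfamiliar, a typical invocation looks something like this (the format selection and output template are just one reasonable choice, and `VIDEO_ID` is a placeholder):

```shell
# grab the best available video+audio and mux them into an mp4 container,
# naming the file after the video title
yt-dlp -f "bestvideo*+bestaudio/best" \
  --merge-output-format mp4 \
  -o "%(title)s.%(ext)s" \
  "https://www.youtube.com/watch?v=VIDEO_ID"
```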

[–] [email protected] 1 points 3 months ago

AdGuard Home supports static clients. Unless the instance is used over plain port 53 (unencrypted), it is by far better to embed client names in the DNS server address and unblock clients based on those.

For DoT: clientname.dns.yourdomain.com
For DoH: https://dns.yourdomain.com/dns-query/clientname

A client, especially a mobile one, simply cannot guarantee always having the same IP address.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (2 children)

If you don't fear using a little bit of terminal, Caddy is imo the better choice. It makes SSL even more brainless (since it's 100% automatic), is very easy to configure (especially for reverse proxying) yet very powerful if you need it, has wonderful documentation and an extensive extension library, doesn't require a MySQL database that eats 200 MB of RAM, and doesn't have unnecessary limitations due to UI abstractions. There are many more advantages to Caddy over NPM. I haven't looked back since I switched.

An example caddyfile for reverse proxying to a docker container from a hostname, with automatic SSL certificates, automatic websockets and all the other typical bells and whistles:

https://yourdomain.com {
  reverse_proxy radarr:7878
}

[–] [email protected] 1 points 3 months ago

The demo instance would be their commercial service, I suppose: https://ente.io/. Since, in their own words, the GitHub code is a 1:1 representation of the code running on their own servers, the result when self-hosting should be identical.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (2 children)

There's a Dockerfile that you can use for building. It barely changes the flow of how you set up the container. The bigger issue imo is that it literally is the code they use for their premium service, meaning all the payment stuff is in there. And I don't know if the apps even support connecting to a custom instance.

Edit: their docs state that the apps all support custom instances, making this more intriguing.

635
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/selfhosted

Mid 2022, a friend of mine helped me set up a selfhosted Vaultwarden instance. Since then, my "infrastructure" has not stopped growing, and I've been learning each and every day about how services work, how they communicate and how I can move data from one place to another. It's truly incredible, and my favorite hobby by a long shot.

Here's a map of what I've built so far. Right now, I'm mostly done, but time will surely bring more ideas. I've also left out a bunch of technically relevant connections, like DNS resolution through the AdGuard instance, the firewalls, and CrowdSec on the main VPS.

Looking at the setups that others have posted, I don't think this is super incredible - but if you have input or questions about the setup, I'll do my best to explain it all. None of my peers really understand what it takes to construct something like this, so I'm in need of people who understand my excitement and pride :)

Edit: the image was compressed a bit too much, so here's the full res image for the curious: https://files.catbox.moe/iyq5vx.png And a dark version for the night owls: https://files.catbox.moe/hy713z.png


Hello SelfHosters! After getting myself a wonderfully large NAS and spending a couple days thinking about how to link up the different services, I turn to you for advice. This is my situation:

I've been operating a cheap VPS for a while now, which runs a bunch of services that require neither lots of storage nor compute (webserver, Vaultwarden, Gitea and so on). But I refuse to pay the price for a large-capacity, powerful remote machine for stuff like Jellyfin or Immich, especially because I want these things to be available on the local network no matter the network state (the internet drops frequently here). Therefore, I've set up a ~50TB NAS, on which I want to both store and back up larger data sets, as well as run some storage- and traffic-heavy applications (Jellyfin, Immich, Nextcloud, ...).

What I'm struggling with is the networking of things. My VPS sits behind a Cloudflare proxy, and I like it that way. All services are managed via domains and accessible from anywhere through them. I neither want nor need to isolate these services behind a VPN. I want to continue this way with the new homelab, but I am unable to directly expose ports on my home connection or to get a static IP. As an additional complication, traffic from these data-heavy applications cannot run through Cloudflare due to the limitations of their free plan. Finally, in a perfect world, I would be able to manage the domain names for homelab services in the Nginx container on the VPS, so that everything is centralized and I don't have separate management interfaces.

My first idea was to connect the VPS and the Homelab with a Wireguard tunnel, but since this would route traffic through Cloudflare, it wouldn't work.

network layout with a tunnel

I then read about Tailscale, and that I could link up the Homelab and VPS in a tailnet, setting up the node on the VPS as subnet router for the docker network on the homelab, which would bring me to something along these lines:
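If that route is taken, the Tailscale side might look roughly like this (the subnet range is a placeholder; note that the node advertising a subnet has to sit inside that subnet, so the Docker network would be advertised from the homelab and accepted on the VPS):

```shell
# on the homelab: advertise the docker bridge subnet (placeholder range)
sudo tailscale up --advertise-routes=172.20.0.0/16

# on the VPS: accept subnet routes advertised by other nodes
sudo tailscale up --accept-routes

# the advertised route still has to be approved in the Tailscale admin console
```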

network layout with a direct connection

In a perfect world, the Nginx container on the VPS would be able to seamlessly direct traffic to services running on both the VPS and the homelab; data coming from the homelab would be routed directly to the client, while VPS data would continue running through Cloudflare. This would work without the client having to connect to any VPN or mesh network; the domain name would have to be enough.
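On the Nginx side, a homelab service would then just be an ordinary upstream reached over the tailnet. A sketch, with placeholder hostname, tailnet IP and port, and TLS certificate directives omitted:

```nginx
# proxy a media hostname to Jellyfin on the homelab via its Tailscale address
server {
    listen 443 ssl;
    server_name jellyfin.yourdomain.com;

    location / {
        proxy_pass http://100.64.0.2:8096;  # placeholder tailnet IP of the homelab
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        # websocket support for the Jellyfin web client
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```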

Maybe I'm overcomplicating things. Please don't feel obligated to copy-paste guides; I'll happily read external resources that you can recommend. I'll also provide clarifications in the comments as needed. Any pointers on how you solve this would be much appreciated.
