Selfhosted

40491 readers
698 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues in the community? Report them using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
1
375
submitted 2 years ago* (last edited 2 years ago) by devve to c/selfhosted

Hello everyone! Mods here 😊

Tell us, what services do you selfhost? Extra points for selfhosted hardware infrastructure.

Feel free to take it as a chance to present yourself to the community!

🦎

2

Greetings, self-hosting enthusiasts, and welcome to the Selfhosted group on Lemmy! I am Casey (formerly /u/Fimeg), your tour guide through the labyrinth of digital change. As you’re likely aware, we’re witnessing a considerable transformation in the landscape of online communities, particularly around Reddit. So let’s indulge our inner tech geeks, dive into the details of this issue, and explore how we, as a self-hosting community, can contribute to the solution.

The crux of the upheaval is a policy change from Reddit that’s putting the existence of beloved third-party apps, like Reddit is Fun, Narwhal, and BaconReader, in jeopardy. Reddit has begun charging exorbitant fees for API usage, so much so that Apollo is facing a monthly charge of $1.7 million. These charges have provoked an outcry from the Reddit community, leading a number of subreddits to plan going dark in protest.

These actions have pushed many users to seek out alternative platforms, such as Lemmy, to continue their digital explorations. The migration to Lemmy is especially significant for us self-hosters. Third-party applications have long been a critical part of our Reddit experience, offering unique features and user experiences not available on the official app.

As members of the Selfhosted group on Lemmy, we’re not just bystanders in this shift - we have the knowledge, skills, and power to contribute to the solution. One of the ways we can contribute is by assisting with the archiving efforts currently being organized by r/datahoarder on Reddit. As self-hosting enthusiasts, we understand the value of data preservation and have the technical acumen required to ensure the wealth of information on Reddit is not lost due to these policy changes.

So, while we navigate this new territory on Lemmy, let’s continue to engage in productive discussions, share insights, and help to shape the future of online communities. Your decision to join Lemmy’s Selfhosted group signifies a commitment to maintain the spirit of a free and open internet, a cause that is dear to all of us.

Finally, in line with the spirit of the original Reddit post, if you wish to spend money, consider supporting open-source projects or charities that promote a free and accessible internet.

With that, let’s roll up our digital sleeves and embark on this new journey together. Welcome to the Selfhosted group on Lemmy!

P.S. Thank you to Ruud who is actively maintaining the moderation front in this community!

3
1
Upgrading to 3G broadband (self.selfhosted)
submitted 2 minutes ago by brewery to c/selfhosted

I've really landed on my feet here.

Background

Our road recently got upgraded to full fibre, so I switched my ADSL supplier from 300Mb to 1Gb (is it even still ADSL?!?). I also have cable broadband at 600Mb, so last year I bought an Omada router with dual WAN, then bought two EAPs, and I've been quite happy with the speeds. My equipment includes a desktop PC as a home server and a mini PC running Pi-hole and Home Assistant.

Cable broadband (Virgin Media) just came up for renewal, so they offered me 1Gb at the same price (£35 a month) to compete with the new speeds on my street.

The new 1Gb ADSL provider had incorrect info on their website, so I ended up on CGNAT instead of a dynamic IP. The site said they have dynamic IPs for 1Gb and 3Gb lines, which was part of the reason I went for 1Gb, and I made that clear to them. They took a while trying to fix it and were pretty poor about it, so they just offered me a 3Gb upgrade for £39pm and 6 months free!!!

They're coming on Monday to replace the modem/router with a 3Gb one. I can keep the old router (a brand-new 1Gb Wi-Fi 6 router) as a mesh node.

Advice needed

Please help me figure out what I need to change to make the most of it! I purposefully didn't go beyond 1Gb equipment, as I was not expecting this much speed for many, many years!

If anybody knows good resources on upgrading home networks past 1Gb, please let me know.

For my home network, do I just sell everything I have and start again? Do I just use their modem and WiFi?

Do I need to check all my cables and potentially upgrade them? How do you check cable speeds if they don't have a rating printed on them?
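
For what it's worth, the only method I know is checking what the link actually negotiates rather than reading the cable itself. A rough sketch, assuming a Linux box with ethtool and iperf3 installed (interface name and IP are placeholders):

# show the negotiated link speed on an interface
ethtool eth0 | grep -i speed

# measure real throughput to another LAN host that is running iperf3 -s
iperf3 -c 192.168.1.10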

On my home server, do I need to upgrade the network card to get the most out of it? Will it be fine if the connection to the Pi-hole DNS is still 1Gb, given it's only resolving addresses?

I'm sorry to anyone on lower speeds reading this with envy. I do appreciate how lucky I am.

TL;DR: broadband provider messed up, so I got a ridiculously cheap upgrade to 3Gb broadband and also upgraded to 1Gb cable (4Gb total across dual WAN). How do I make the most of this, given my equipment is all 1Gb?!?

4
15
submitted 8 hours ago* (last edited 4 hours ago) by Agility0971 to c/selfhosted

I'm looking for a simple remote system monitoring and alerting tool. Nothing fancy. Do you know of any? Features:

  • monitors CPU, memory and disk space
  • can accept multiple hosts to watch
  • has some sort of alerting system
  • can be deployed as a single docker container
  • can be configured using a text file
  • config can be imported and exported as part of the docker-compose file

I like uptime-kuma, but it only records uptime. Other containers I've found seemed overly complicated; they require multiple docker containers for log aggregation, etc.
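
To make the single-container requirement concrete, the deployment I'm picturing is something like this (the image name is made up):

# one container, one bind-mounted text config, nothing else
docker run -d \
  --name sysmon \
  -v "$(pwd)/monitor.yml:/etc/monitor/config.yml:ro" \
  -p 8080:8080 \
  example/simple-monitor:latest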

5

I was wondering: do you know of a limit on how many rootless containers one can run on a Linux host?

I'm running Fedora Server and have the resources, but once I pass about 15 containers, podman starts to hang and crash.

I then need to manually delete the storage folder under ~/.local/share/... for podman to work again.
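
For anyone else hitting this: the manual deletion I've been doing is apparently what podman system reset does properly. A sketch (destructive: it wipes ALL rootless containers, images, and volumes):

# check where rootless storage actually lives
podman info | grep -i graphroot

# wipe and re-initialize rootless storage
podman system reset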

It might be related to the --userns=keep-id flag.

6

Inspired by a comment on my last post.

I feel like I never have a solution that lets me stay in control while also being automated enough that I don't end up with a huge, confusing backlog if I don't do my finances for days or weeks.

7

I'm down to the last few hours of the discounts here. I should have put my NAS and my server on a UPS months ago. Both are already set to come back on when power is restored. We rarely have power outages and we have solar panels (no house battery though), so a full outage is even rarer.

I understand that a UPS can send a shutdown signal when power is lost. Is there a universal standard or protocol for this? If so, what keywords should I use when searching for compatible products? My father told me to look for one with an Ethernet port. I just want to make sure everything is compatible. I occasionally go out of town, and as well as preventing data loss, I need everything to go down and come back up automatically so I don't have to call a friend, neighbor, or my spouse to go mess with stuff for me.
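
The closest thing to a standard I've found so far is NUT (Network UPS Tools), which seems to support most of the big brands over USB; happy to be corrected. A quick sanity check against a configured UPS looks something like this ('myups' is whatever name you give it in /etc/nut/ups.conf):

# query the UPS state from the NUT daemon
upsc myups@localhost

# variables worth watching: ups.status (OL = online, OB = on battery) and battery.charge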

UPS brands considered (alternatives welcome): APC, Cyberpower

Systems to protect: Synology DS220+ & Beelink mini PC running Debian 12.


Also, for anyone who has helped me out previously in my self-hosted journey, thank you! Things are going great: I have a few useful docker images running various services and have set up grub-btrfs snapshots to easily fix my screwups. This community has been incredibly helpful.

8

Mine is beaverhabits, a good habit-tracking app that came out recently.

9
3
submitted 1 day ago* (last edited 1 hour ago) by steel_moose to c/selfhosted

Hi all!

I'm stuck with a problem on my TrueNAS server. I suspect the boot drive is dead, but I don't know how to proceed to get it back up and running.

My setup:
  • HP Compaq 8000 Elite SFF with 8 GB DDR3
  • TrueNAS-24.04.2.5
  • Boot drive: Kingston A400 240 GB SSD
  • Data drives: 2 × Kingston DC600M 960 GB SSD

Today I noticed that my TrueNAS was offline and I couldn't SSH into it. The NIC lights were not showing any activity. After a few boot attempts I hooked it up to a monitor and keyboard: no boot media detected. I then checked the BIOS; the data drives are detected, but not the 240 GB boot drive. So I pulled the boot drive out and hooked it up to my ThinkPad running MX Linux, using a USB-to-SATA cable that I've used before to troubleshoot drives. The drive is not showing up in the disk manager. The output of dmesg -w when hooking up the drive is the following:

[ 2369.520731] usb 2-1: new SuperSpeed USB device number 9 using xhci_hcd  
[ 2369.547175] usb 2-1: New USB device found, idVendor=2109, idProduct=0711, bcdDevice= 1.44  
[ 2369.547194] usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=3  
[ 2369.547202] usb 2-1: Product: VLI Product String  
[ 2369.547208] usb 2-1: Manufacturer: VLI manufacture String  
[ 2369.547213] usb 2-1: SerialNumber: 000000123AFF  
[ 2369.549433] usb-storage 2-1:1.0: USB Mass Storage device detected  
[ 2369.549917] usb-storage 2-1:1.0: Quirks match for vid 2109 pid 0711: 2000000  
[ 2369.550061] scsi host3: usb-storage 2-1:1.0  

I've been trying to resolve this for hours with Google, but to no avail. Any help on where to go from here is appreciated.
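
For reference, in case it helps: once the drive actually enumerates as a block device, these are the checks I'd expect to run next (device name is a placeholder; smartctl's -d sat flag is for USB-to-SATA bridges). Right now it never even gets a /dev/sdX node, which I assume is the real problem.

# see whether the kernel ever assigns a block device node to the drive
lsblk

# if it does, query SMART health through the USB bridge
smartctl -a -d sat /dev/sdX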

Thanks 🙏

10
28
Should I bother with raid (self.selfhosted)
submitted 2 days ago by Dust0741 to c/selfhosted

I have a 2-bay NAS and was planning on using 2× 18TB HDDs in RAID 1. I was planning on purchasing 3 of these drives so that when one fails I have a replacement on hand. (I am aware that you should purchase them at different times to reduce the risk of them all failing at the same time.)

Then I set up restic.

It makes backups so easy that I am wondering if I should even bother with RAID.
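
For context, my restic setup is essentially the stock S3 flow from the docs (bucket name and paths are placeholders):

# one-time repository setup; credentials come from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY
restic -r s3:s3.amazonaws.com/my-bucket init

# regular backup, then prune old snapshots
restic -r s3:s3.amazonaws.com/my-bucket backup /data
restic -r s3:s3.amazonaws.com/my-bucket forget --keep-daily 7 --keep-weekly 4 --prune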

Currently I have ~1TB of backups, and with restic's snapshots it won't grow to be that big anyway.

Either way, I will be storing the backups in AWS S3 (and I'll also be storing backups at my parents' place). So is it still worth it to use RAID?

11

Right now I’m running a Late 2012 Mac mini (8 × Intel Core i7-3615QM @ 2.30GHz) with a 1TB SSD, a 4TB external USB HDD, and 16GB of RAM. It runs Proxmox with a VM running Docker (just the Transmission-OpenVPN container right now), a VM for a Debian VS Code tunnel, and an LXC container for Plex. I also have a Pi 3B running Pi-hole, and I use a Mac Studio (500GB SSD) as my personal computer. I’m on Fios with a 1G fiber connection, a TP-Link router (AX3000), and two daisy-chained 1G unmanaged switches (an unfortunate scenario due to my small apartment): one near my entertainment center (Apple TV, PS5, etc.) and another near my desk and the Mini/Studio/Pi.

I’d like to build a NAS server which I could also use for these services, the priorities being 4K transcoding capability and drive bays for a NAS. I would also like to set up a WireGuard VPN so I can VNC to my Mac and access home services when I’m away; this is done with the TP-Link router right now.

Right now I can’t decide between Intel and AMD for the CPU, between buying something new to future-proof or buying older used office hardware, and what I should prioritize (server or network).

Currently I’ve got a mix of personal data in Dropbox and iCloud Drive; I’ll likely consolidate it all to iCloud and eventually my NAS, and have the NAS data backed up to Backblaze as well. I’d also like to back up my Studio to multiple Time Machine backups and keep them in multiple locations. My media is currently all on the external drive, and nothing is super valuable, just TV and movies (removes eye patch).

I’m trying to learn Linux and some web development (mostly three.js), so I’ll set up a new VM, probably NixOS going forward, specifically for coding and web dev learning.

I’m looking for hardware recommendations for the Proxmox NAS server and also for networking equipment; I’d like to move off the TP-Link hardware and use something open source. Also, any suggestions for other services to run or considerations I may have missed: for example monitoring, how to manage users/access like SSH, where to buy hardware, home services you can’t live without, etc.

I know this is a broad AF post, but figured it could trigger some good discussions!

12

I make use of sharedrop.io to quickly share files between phones and computers on my LAN. Does anyone know of a self-hosted alternative, preferably containerised with Docker?

13

I currently have an HP MicroServer Gen8 running Xpenology with hybrid RAID, which works fairly well, but I’m 2 major versions behind. I’m quite happy with it, but I’d like an easier upgrade process and more options. My main use is NAS plus a couple of apps. I’d like more flexibility, to easily run an arr suite, etc.

Considering the hassle of safely upgrading Xpenology because of the hybrid RAID (4+4+2+2 TB HDDs), I’d like a setup which I can easily upgrade and modify.

What are my options here? What RAID options are there that can easily and efficiently use these mixed-size disks?

I don’t have the spare money right now to replace the 2 TB disks; that’s planned for the future.

14

Hi!

I have an old gaming PC (i5 9400F) with 16GB of RAM that has been acting as my home server with Proxmox. It’s quite large, quite loud, and very overpowered for what I’m using it for (Home Assistant, a Minecraft server, some LXC containers). I also have a mini PC (AMD 5800H with 16GB RAM).

I want to sell the gaming PC, move its HDD into a NAS (and Samba-share my Plex library), and potentially grab a low-powered N100 mini PC to pick up the LXC containers and Home Assistant that the gaming PC is running.

I'm new to self-hosting, so I'm wondering if this is a good setup or if there are any glaring issues you can see with it. What is your setup?

15
26
submitted 2 days ago* (last edited 2 days ago) by one_knight_scripting to c/selfhosted

Hello there Selfhosted community!

This is an announcement of the completion of a project I've been working on: a script for installing Ubuntu 24.04 on a ZFS RAID 10. I'd like to describe why I chose to develop this and how other people can use it as well. Let's start with the hardware.

I am using an old host. Mine in particular was originally a BCDR device based on a ZFS raidz implementation. Since it was designed for ZFS, it doesn't even have a RAID card, only an HBA, so for redundancy ZFS is a good way to go. Even though this was a backup appliance, it did not have root on ZFS; instead, it had a separate hard drive for the operating system and three individual disks for the zpool. This was not my goal.

So I did a little research and testing. I looked at two particular guides (Debian/Ubuntu). I performed those steps dozens of times because I kept messing up the little things, and to eliminate the human error (that's me) I decided to just go ahead and script the whole thing.

The GitHub repository I linked contains all the code needed to set up a generic ubuntu-server host on a ZFS RAID 10.
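
For anyone unfamiliar with the term: "ZFS RAID 10" just means a pool of striped mirrors. At the zpool level, the layout boils down to something like this (device names are examples; the script handles the real partitioning and boot setup):

# two mirrored pairs striped together, i.e. RAID 10
zpool create tank \
  mirror /dev/sda /dev/sdb \
  mirror /dev/sdc /dev/sdd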

Instructions for starting the script are easy. Boot up a live CD (https://ubuntu.com/download/server), hit CTRL+ALT+F2 to get to a shell, and run the following command:

bash <(wget -qO- https://raw.githubusercontent.com/Reddimes/ubuntu-zfsraid10/refs/heads/main/tools/install.sh)

This command clones the repository, changes directory into it, and runs the entry point (sudo ./init.sh). Hopefully, this should be easy to customize to meet your needs.

More engineering details are on GitHub.

16

Hey guys,

it's me again with the medication assistant :D For anyone who has never heard of MedAssist, it is a self-hosted web application that tracks medication usage. Its main feature is sending an e-mail reminder when it's time to reorder medication. I have received great feedback, and you all made me even more excited to spend time on this project. Honestly, I can't believe how many people even visited the GitHub page, thank you a lot! Some of you broke the demo page, which helped me find weak spots, so thanks for that as well <3. I received some feature requests and bug reports via Reddit, Lemmy, and GitHub. I spent some time working on them and now I want to announce an update (still on the develop branch):

  • Possible to have Usage = 0
  • Filtered invalid characters on inputs
  • Reduced CPU/Memory usage by improving backend (hopefully no more crashes)
  • Rebuilt Upcoming Schedules (more simple and lightweight)
  • Added more styling to e-mail notifications

Demo is up and running again, feel free to try it or break it. Fingers crossed there are not many bugs left. If it turns out it's stable enough, I'll merge develop into main and create the latest release. I'm planning to add a few more features in the next release.

BREAKING CHANGE: the database path was changed (to achieve a uniform path no matter which installation method is chosen), so make sure you back up your database file (medication.db) and update the volume in your docker-compose from:

volumes:
  - /path/to/database/directory:/app/medassist

to:

volumes:
  - /path/to/database/directory:/app/database

Also change the version tag to develop or v0.15.0.develop if you are using Docker. Link directly to the develop branch with the new update: https://github.com/njic/medassist/tree/develop

All suggestions are welcome, and feel free to star the project on GitHub <3 R---

17

I'm having an issue getting a container running in gluetun's network to access the host.

In theory there's a variable for this, FIREWALL_OUTBOUND_SUBNETS:
https://github.com/qdm12/gluetun-wiki/blob/main/setup/connect-a-lan-device-to-gluetun.md#access-your-lan-through-gluetun
When I include 172.16.0.0/12, I can ping the IP assigned with host-gateway, but I can't curl anything.
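
For reference, the relevant bit of my setup, expressed as a docker run (trimmed; provider settings omitted):

# gluetun with outbound access to the Docker bridge ranges opened up
docker run -d --name gluetun \
  --cap-add NET_ADMIN \
  -e VPN_SERVICE_PROVIDER=... \
  -e FIREWALL_OUTBOUND_SUBNETS=172.16.0.0/12 \
  qmcgaw/gluetun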

The command just sits like this until it times out:

# curl -vvv 172.17.0.1
*   Trying 172.17.0.1:80...

I also tried adding 100.64.0.0/10 to connect to Tailscale, but it's the same behaviour: ping works and curl times out.

Any other request works properly when connected via the VPN configured in gluetun.

Do you guys have any idea what I might be missing?

18

My little NUC server. I had to change some BIOS configs today, so I busted out the old LCD, and since it was there already, I did a package update for the hell of it.

19
5
Troubleshooting immich (self.selfhosted)
submitted 2 days ago by gedaliyah to c/selfhosted
20
40
Offline Game Library? (self.selfhosted)
submitted 3 days ago by ProtecyaTec to c/selfhosted

What are y'all using for your offline game libraries? I ended up getting Resident Evil on GOG and started thinking about how I can host these on a NAS. Maybe something Dockerized?

Jellyfin for music and video

Immich for images

Audiobookshelf for Audiobooks

??? for Gaming?

21

Edit: exchanged the F-Droid URL for the GitHub one

22

cross-posted from: https://lemmy.ca/post/34005993

RomM (ROM Manager) allows you to scan, enrich, and browse your game collection with a clean and responsive interface. With support for multiple platforms, various naming schemes, and custom tags, RomM is a must-have for anyone who plays on emulators.


Release v3.6.0 · rommapp/romm

This Thanksgiving, we’re serving up 3.6.0, a hearty update stuffed with QOL improvements and bug fixes that will leave you as satisfied as a plate full of turkey with all the trimmings. 🦃

Track your game progress, completions, and star ratings under the new "Personal" tab, and use them to filter your games by "backlogged", "finished" or "100% completed". We've also moved your (and shared) notes under the same tab.

  • Display and filter games by age rating (requires a quick sync)
  • Use filename without tags or extension when matching unmatched game
  • Skip hashing games on desktop platforms for faster scans
  • Improved memory usage during 7zip decompression
  • New env variable UPLOAD_TIMEOUT allows for larger file uploads
  • Edit file exclusions for config.yml from the UI
23

Hi /c/selfhosted,

I want to introduce PdfDing to this community. PdfDing is a PDF manager and viewer that you can host yourself. It offers a seamless user experience on multiple devices and is designed to be minimal, fast, and easy to set up using Docker. The repo can be found here. Features include:

  • Seamless browser based PDF viewing on multiple devices
  • Dark Mode, colored themes and custom theme colors
  • Inverted color mode for reading PDFs
  • Remembers current position - continue where you stopped reading
  • SSO support via OIDC
  • Share PDFs with an external audience via a link or a QR Code
  • Shared PDFs can be password protected and access can be controlled with a maximum number of views and an expiration date
  • Automated and encrypted backups to S3 compatible storage

I would be very happy if you would give PdfDing a try. If you like it, be sure to leave a star :)
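
If you want a quick look before reading the docs, a minimal Docker setup would look something like this; the image name, port, and data path here are placeholders, so check the repo's README for the real values:

# placeholder image, port, and volume path
docker run -d \
  --name pdfding \
  -p 8000:8000 \
  -v pdfding_data:/app/data \
  example/pdfding:latest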

24
25
14
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/selfhosted

I currently have a two-disk ZFS mirror which has been filling up recently, so I decided to buy another drive. But when I started thinking about it, I was unsure how to actually make it usable. The issue is that I have ~11 TB on the existing pool (two 12 TB drives, a and b) and no spare drives of that size to copy all my data to while creating the new 3-drive pool out of the same drives plus the additional new drive (c).

I saw there is a way to create a "broken" (degraded) three-drive pool with just two physical drives, while the data stays on the remaining drive, then copy the data over to the new pool and "repair" it afterwards with the freed drive.
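
As far as I understand it, the trick uses a sparse file as a stand-in member so the pool starts out degraded. Roughly like this (untested; device names and size are placeholders, and the pool has zero redundancy until the final replace finishes):

# free up one mirror drive; the data stays on the other
zpool detach oldpool /dev/sdb
# a sparse file stands in for the disk that still holds the data
truncate -s 12T /tmp/fake.img
zpool create newpool raidz1 /dev/sdb /dev/sdc /tmp/fake.img
# take the fake member offline so nothing is ever written to it
zpool offline newpool /tmp/fake.img
# copy the data over (zfs send/recv), destroy the old pool, then:
zpool replace newpool /tmp/fake.img /dev/sda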

As I only have ~11 TB of data, which would theoretically fit on one disk, would I be able to:

  • keep the old pool
  • initialize the new pool with just one drive and copy over the data
  • detach one drive from the old pool and add it to the new pool (if possible; would there already be parity data generated on this drive at that point? Would the parity be generated in a way that would allow me to lose the other drive in the pool and recover the data from the remaining drive alone?)
  • destroy the old pool and add the last drive to the new pool

I would be able to back up my important data, but I don't have enough space to also back up my media library, which I'd rather not have to rebuild.

Alternatively: anyone in Berlin wanna loan me a 12 TB drive?
