[-] [email protected] 1 points 10 minutes ago* (last edited 9 minutes ago)

I like this version of Fedora Atomic with KDE:

https://getaurora.dev/

[-] [email protected] 1 points 12 hours ago

If you are willing to spend a bit more upfront: I bought a mini PC in 2017 and installed OPNsense on it, and it's still rock solid. For WiFi I use a separate AP (a Ubiquiti UAP I bought in 2015) and it is also going strong. Almost a decade of rock-solid performance easily beats any other router I've owned, in terms of both performance and cost.

[-] [email protected] 1 points 12 hours ago

I have an atomic variant of Fedora 40 (Aurora) and it just works on an Intel CPU with integrated graphics. I have a USB-C dongle with HDMI out and it just works when I plug it in.

I also tried it with my Steam Deck dock the other day and it worked without issue.

[-] [email protected] 1 points 1 day ago* (last edited 1 day ago)

Thanks! Makes sense if you can't change file systems.

For what it's worth, ZFS lets you dedupe on a per-dataset basis, so you can easily choose to have some files deduped and not others. Same with compression.
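A minimal sketch of what that looks like (pool/dataset names here are made up):

```
zfs set dedup=on tank/minecraft-backup        # dedupe just this dataset
zfs set compression=lz4 tank/minecraft-backup
zfs set dedup=off tank/media                  # other datasets are left alone
zfs get dedup,compression tank/minecraft-backup
```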

For example, without building anything new, the setup could have been: copy the data from the actual Minecraft server to a backup server running ZFS, using rsync or some other tool. The backup server then just takes a snapshot every 5 minutes or whatever. You now have a backup on another system, with snapshots at whatever frequency you want, with dedup.

Restoring an old backup just means you rsync from a snapshot back to the Minecraft server.
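Roughly, and assuming made-up hostnames and paths, that workflow could look like this on the backup box:

```
# Pull the world data over, then snapshot the dataset (run from cron every 5 min)
rsync -a --delete minecraft@mc-server:/srv/minecraft/ /tank/minecraft/
zfs snapshot tank/minecraft@$(date +%Y%m%d-%H%M)

# Restore: copy files out of any snapshot back to the game server
rsync -a /tank/minecraft/.zfs/snapshot/20240101-1205/ minecraft@mc-server:/srv/minecraft/
```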

Rsync is only needed if both servers don't have ZFS. If they both have ZFS, the send and receive commands built into ZFS are designed for exactly this use case: you can easily send a snapshot to another server.
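For example (hostnames and snapshot names are placeholders):

```
# Initial full copy of a snapshot to the other ZFS host
zfs send tank/minecraft@monday | ssh backup-host zfs receive backuppool/minecraft

# Afterwards, send only the changes between two snapshots (incremental)
zfs send -i tank/minecraft@monday tank/minecraft@tuesday | \
    ssh backup-host zfs receive backuppool/minecraft
```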

ZFS also has Samba and NFS exports built in if you want to share the filesystem with another server.
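A sketch of the property-based sharing (the subnet and dataset are examples, and the NFS/Samba services still need to be installed on the host):

```
zfs set sharenfs="rw=@192.168.1.0/24" tank/minecraft
zfs set sharesmb=on tank/minecraft
zfs get sharenfs,sharesmb tank/minecraft
```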

[-] [email protected] 2 points 1 day ago* (last edited 1 day ago)

I use ZFS so I'm not sure about others, but I thought all CoW filesystems already have deduplication? ZFS supports it natively. Why build your own file deduplication system instead of just using a ZFS filesystem and letting it do the work for you?

Snapshots are also extremely efficient on CoW filesystems like ZFS, since they only store the diff between the previous state and the current one, so taking a snapshot every 5 minutes is not a big deal for my homelab.

I can easily explore any of the snapshots and pull any file from any of them.
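For anyone unfamiliar, snapshots show up as read-only directories under the hidden .zfs folder, so pulling a single file back is just a copy (paths here are examples):

```
ls /tank/data/.zfs/snapshot/                      # list available snapshots
cp /tank/data/.zfs/snapshot/auto-20240101-1205/photo.jpg /tank/data/photo.jpg
```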

I'm not trying to shit on your project, just trying to understand its use case, since it seems to me ZFS already provides all of these benefits.

[-] [email protected] 1 points 1 day ago

Start with this to learn how snapshots work:

https://fedoramagazine.org/working-with-btrfs-snapshots/

Then here, to learn how to make automatic snapshots with retention:

https://ounapuu.ee/posts/2022/04/05/btrfs-snapshots/

I do something very similar with ZFS snapshots and deduplication turned on. I take a snapshot every 5 minutes and keep an hour's worth, then keep 24 hourly snapshots each day, and daily snapshots for a month, and so on.

For backups to remote locations, you can send a snapshot offsite.
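A rough cron-driven sketch of that retention policy (the dataset name is made up; in practice tools like sanoid/syncoid or zfs-auto-snapshot handle the scheduling, pruning and offsite sends properly):

```
#!/bin/sh
DATASET=tank/data

# Take a snapshot (cron runs this every 5 minutes)
zfs snapshot "${DATASET}@auto-$(date +%Y%m%d-%H%M)"

# Keep the 12 newest auto- snapshots (1 hour at 5-minute intervals), prune the rest
zfs list -H -t snapshot -o name -S creation \
    | grep "^${DATASET}@auto-" \
    | tail -n +13 \
    | xargs -r -n1 zfs destroy
```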

[-] [email protected] 6 points 1 day ago

FreshRSS, if you want a self-hosted option.

[-] [email protected] 5 points 2 days ago

This is really amazing! In theory, can you use 2GB with 4 different VMs?

[-] [email protected] 18 points 2 days ago

Conversely, I have a Dell XPS from 2018 that runs very well with Fedora Atomic (KDE). I upgraded the SSD and WiFi card and replaced the battery. It should easily last me another 5 years.

[-] [email protected] 4 points 2 days ago

So Fedora Atomic?

There are like a dozen variants as well, to suit almost any specialty application.

[-] [email protected] 2 points 2 days ago

The proper way of doing this is to have two separate systems in a cluster, such as Proxmox. The system with GPUs runs certain workloads and the non-GPU system runs other workloads.

Each system can be connected (or not) to a UPS, shut down during a power outage, and then boot back up when power is back.

Don't try hot-plugging a GPU; it will never be reliable.

A Proxmox or Kubernetes cluster is designed for this type of application, but it will add a fair amount of complexity.

[-] [email protected] 2 points 6 days ago

First thing I would do is boot a live Ubuntu image from a USB. Make sure the hardware all works as expected.

49
submitted 3 weeks ago* (last edited 3 weeks ago) by [email protected] to c/selfhosted

Technically this isn't actually a Seafile issue; however, the upload client really should be able to run checksums to compare the original file against the file that is being synced to the server (or other device).

I run Docker in a VM that is hosted by Proxmox. Proxmox manages a ZFS array which contains the primary storage the VM uses. Instead of making the VM disk 1TB+, the VM disk is relatively small since it only holds the OS (64GB), and the Docker containers mount a folder on the ZFS array itself, which is several TBs.
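For context, the containers just bind-mount a path that is backed by the ZFS array, something like this (the image name and mount point are from memory of the Seafile docker setup, so treat them as assumptions):

```
# VM disk stays small; bulk data lives on the ZFS-backed mount inside the VM
docker run -d --name seafile \
  -v /mnt/zfs-data/seafile:/shared \
  seafileltd/seafile-mc:latest
```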

This has all been going really well with no issues, until yesterday when I tried to access some old photos and they would only load halfway. The top part would be there, but the bottom half would be grey/missing.

This seemed to be randomly present on numerous photos, however some were normal and others had missing sections. Digging deeper, some files were also corrupt and would not open at all (PDFs, etc).

Badness alert....

All my backups come from the server. If the server data has been corrupt for a long time, then all the backups would be corrupt as well. All the files on the Seafile server were originally synced from my desktop, so when I open a file locally on the desktop it works fine; it's only when I try to open the file through Seafile that it fails. Also, not all the files were failing, only some. Some old, some new. Even the file sizes didn't consistently predict whether a file would work or not.

It's now at the point where I can take a photo from my desktop, drag it into a Seafile library via the browser and it shows a successful upload, but then trying to preview the file won't work, and downloading that very same file back again gives a file of about 44KB regardless of the original file size.

Google/DDG...can't find anyone that has the same issue...very bad

Finally, I notice an error in MariaDB: "memory pressure, can't write to disk" (paraphrased).

OK, that's odd. The RAM was fine, which is what I first assumed it was. Disk space can't be the issue since the ZFS array is only 25% full, and both MariaDB and Seafile only have volumes that are on the ZFS array. There are no other volumes... or are there???

Finally, checking the existing volumes in Portainer, Seafile only has the two expected ones, data and database. Then I see hundreds of unused volumes.

A quick Google reveals docker volume prune, which deletes many GBs worth of old, unused volumes.
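For reference, the cleanup itself is just a couple of standard Docker CLI commands:

```
docker volume ls -f dangling=true   # list volumes not referenced by any container
docker volume prune                 # remove all unused local volumes
docker system df -v                 # see how much space images/volumes are using
```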

By this point, I had already created and recreated the Seafile docker containers a hundred times with test data and simplified the docker compose as much as possible, but after the prune it started working right away. MariaDB works, and I can now copy a file from the web interface or the client and it works correctly.

Now I go through the process of setting up my original docker compose with all the extras I had, remake my user account (luckily it's just me right now), set up the sync client and then start copying the data from my desktop to my server.

I've got to say, this was scary as shit. My setup uploads files from desktop, laptop, phone, etc. to the server via Seafile; from there, Borg takes incremental backups of the data and sends them remotely. The second I realized that the local data on my computer was fine but the server data was unreliable, I immediately knew that even my backups were now unreliable.

IMHO this is a massive problem. Seafile will happily 'upload' a file and report success, but then trying to re-download the file results in an error because it doesn't actually exist on the server.

Things that really should be present to avoid this:

  1. The client should have the option to run a quick checksum on each file after upload and compare the original to the uploaded copy to ensure data consistency. There should probably also be an option to run this check after the fact and output a list of files that are inconsistent (see the sketch after this list).
  2. The default docker compose should include health checks on MariaDB, so that when it starts throwing errors while the web interface still runs, someone can be alerted.
  3. Need some kind of reminder to check in on unused docker volumes.
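Here's a rough manual version of the check in point 1, assuming you have the local library and a freshly re-downloaded copy side by side (paths are examples):

```
# Hash every file in the local library and in the re-downloaded copy, then diff the lists
(cd ~/Seafile/MyLibrary       && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/local.sums
(cd /tmp/redownloaded-library && find . -type f -exec sha256sum {} + | sort -k2) > /tmp/server.sums
diff /tmp/local.sums /tmp/server.sums   # any output means inconsistent files
```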
96
submitted 1 month ago by [email protected] to c/selfhosted

Looking for a self-hosted YouTube front end with an automatic downloader, so you could subscribe to a channel, for example, and it would automatically download all its videos and new uploads.

Jellyfin might be able to handle the front-end part, but I'm not sure about automatic downloads, proper file naming and metadata.

29
submitted 4 months ago by [email protected] to c/[email protected]

The Jellyfin app (self-hosted video streaming) on the Steam Deck (installed via desktop mode -> Discover as a Flatpak) doesn't seem to register as 'playing' with the OS, so the screen will dim after a few minutes.

I'm 'playing' the Jellyfin app as a non-Steam game in game mode.

I know I can disable screen dimming in the settings, but is there a way to have it auto-detect when a video is playing and prevent the screen from dimming?

16
submitted 4 months ago by [email protected] to c/coffee

Any suggestions for roasted decaf beans I can get in Canada?

30
submitted 5 months ago* (last edited 5 months ago) by [email protected] to c/selfhosted

Very solid price, the cheapest I've seen for something like this. Has anyone tried it with OPNsense or other software?

The linked thread talks about someone getting 60C load temps, but the ambient air was 37C and they were using an RJ45 DAC, which is known to use a lot of power.

Wondering if anyone else has experience with this. It seems like a big advancement in what's possible at home scale with new, non-second-hand equipment.

Another article about this: https://liliputing.com/this-small-fanless-pc-is-built-for-networking-with-four-10-gbe-and-five-2-5-gb-ethernet-ports/

123
submitted 6 months ago by [email protected] to c/[email protected]

This should eventually make its way into Jellyfin. Eager to see the performance improvements.

86
submitted 6 months ago* (last edited 6 months ago) by [email protected] to c/[email protected]

Beautiful stats for Jellyfin. I just set it up in docker compose yesterday. Love it!

4
submitted 7 months ago by [email protected] to c/homeautomation

I'm wondering if I can get a device that bridges Z-Wave over Ethernet/WiFi and connect that to my Home Assistant setup.

Basically, I have a Home Assistant setup in my house. I want to add a few simple things at my parents' place, but I want it all to be on the same HA instance.

On the router at my parents' place, I can install WireGuard to connect it to my LAN, so my parents' network effectively becomes part of my LAN.

I'm looking for a device that can talk to Z-Wave devices and then send that info over the LAN to my Home Assistant. Does such a thing exist? Thanks.

5
submitted 7 months ago by [email protected] to c/homeautomation

By local control, I mean: if the Z-Wave hub is down, will the switch still work as a dumb switch and turn the lights on/off?

This is the product I would like to get, but I can't find out whether it allows 'dumb switch' operation. Does anyone have experience with these? https://byjasco.com/ultrapro-z-wave-in-wall-smart-switch-with-quickfit-and-simplewire-white

Thanks!

28
submitted 9 months ago by [email protected] to c/[email protected]

Starship has been stacked and is apparently ready to launch, per Musk. Waiting on FAA approval for the second test flight.

2
submitted 11 months ago by [email protected] to c/[email protected]

Hi all. I just learned about NixOS a few weeks ago. I'm in the process of migrating several of my Docker services to a new server that will have Proxmox installed as the host and a VM for Docker.

I'm currently using Alpine for the VM and it works well, but one of the main goals of the migration is to use infrastructure as code as much as possible. All my Docker services are docker compose files checked into a git repo that gets deployed. When I need to make a change, I update the git repo and pull down the latest docker compose.

I currently have a bunch of steps that I need to do on the Alpine VM to make it ready for Docker (QEMU guest agent, NFS shares, etc.).
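For a sense of scale, the manual prep is only a handful of commands on Alpine (package and service names here are from memory, so treat them as assumptions), but it's exactly the kind of thing I'd rather have declared in one config:

```
apk add qemu-guest-agent docker docker-cli-compose nfs-utils
rc-update add qemu-guest-agent boot
rc-update add docker boot
rc-service docker start
# plus NFS shares added to /etc/fstab and mounted
```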

NixOS promises to do all of that with a single config file and then provide an immutable OS that never changes afterwards. That seems to fit the infrastructure-as-code philosophy and easy reproducibility well.

Has anyone else tried NixOS as a docker host? Any issues you've encountered?

17
submitted 1 year ago by [email protected] to c/[email protected]

I'm just starting to upgrade my basic Unraid Docker setup to an infrastructure-as-code setup.

I will use Unraid as a NAS only. My media and backups will be on Unraid; everything else lives on a separate Proxmox VM that runs an SSD storage array with ZFS. Both the Unraid and Proxmox hosts share their storage via NFS, and each Docker container mounts the NFS volumes as needed.

For the containers, I use an Alpine VM with Docker. I use Portainer to connect to a Gitea repo (on Unraid) to pull down the docker compose files.

So my workflow is: write the compose file in VS Code on my PC, commit it to git, then hit the redeploy button in Portainer and it pulls the latest compose file automatically.

What's your setup?

