I didn't know about zfs send/receive, I'll take a look at that as the data is indeed compressible.
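From a quick look, something like this seems to be the idea: snapshot the dataset, then pipe the send stream into a receive on the backup pool. This is only a sketch of my understanding, and the pool/dataset names are made up; I haven't actually tried it yet.

```python
import subprocess

# Made-up dataset names for illustration.
snapshot = "tank/data@nightly"
target = "backup/data"

# Take a snapshot, then send it to the backup dataset.
# -c sends blocks in their on-disk compressed form, which should help
# since the data is compressible. Later runs would use incremental
# sends (-i) instead of a full stream.
subprocess.run(["zfs", "snapshot", snapshot], check=True)
send = subprocess.Popen(["zfs", "send", "-c", snapshot], stdout=subprocess.PIPE)
subprocess.run(["zfs", "receive", "-F", target], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```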
I think buying a larger drive would indeed be the safest option.
Yeah exactly, I think I'll just upgrade the 1TB drive.
That would indeed solve the problem, although I'm a bit hesitant as I do like the peace of mind of having an additional copy. Otherwise I'd only have two real copies of my data.
I guess since the two drives are in a zfs mirror, restoring could still be done locally without pulling data from Backblaze. But if I want to revert my data to a previous state, I can't do that from a local copy, so I would have to pull quite a lot of data from Backblaze.
If you're looking for just file-sharing capabilities, OwnCloud might be a good alternative. It's recently been rewritten in Go and it's supposedly quite fast, though I must say I haven't used it myself.
Another alternative could be Nextcloud, but that comes with a whole suite of other software such as calendar, contacts, ...
I've also heard some good things about Seafile, but I've never used it myself either.
I personally use a script which stops all containers, rsyncs the bind mounts (normal folders on the filesystem) and then restarts them. It runs every night so it isn't a problem that services are down for a few minutes.
Ideally, you would also make a database dump instead of just backing up the bind mounts.
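For reference, here's a stripped-down sketch of what such a script could look like. The container names, paths and the database dump are placeholders for illustration, not my actual setup:

```python
#!/usr/bin/env python3
import subprocess
from pathlib import Path

CONTAINERS = ["vaultwarden", "gitea", "freshrss"]   # placeholder names
SRC = "/srv/docker/"                                # where the bind mounts live
DEST = "/mnt/backup/docker/"
DUMPS = Path(SRC) / "dumps"                         # dumped inside SRC so rsync picks it up

def run(cmd, **kw):
    subprocess.run(cmd, check=True, **kw)

# Dump the database while its container is still running
# (placeholder container/user names for a Postgres instance).
DUMPS.mkdir(exist_ok=True)
with open(DUMPS / "gitea.sql", "wb") as f:
    run(["docker", "exec", "gitea-db", "pg_dump", "-U", "gitea"], stdout=f)

# Stop everything so the files on disk are in a consistent state.
run(["docker", "stop", *CONTAINERS])
try:
    # Archive-mode rsync of the bind-mounted folders; --delete mirrors removals.
    run(["rsync", "-a", "--delete", SRC, DEST])
finally:
    # Always bring the services back up, even if the rsync failed.
    run(["docker", "start", *CONTAINERS])
```

Something like cron or a systemd timer can then run it nightly.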
A few things I haven't seen in the comments which I'm really happy with are:
- Vikunja
- Scrutiny
- Kimai
Other things which were mentioned already:
- Searx-ng
- Gitea
- Paperless-ngx
- Calibre-web
- Matrix (synapse)
- Nextcloud
- Jellyfin
- Uptime Kuma
- Mealie
- FreshRSS
- Vaultwarden
- Photoprism
All running in Docker containers behind a Caddy reverse proxy.
How I have solved it for now is to keep using the current system, but since a large part of the data is already a backup of devices around the house, those files will only get backed up to Backblaze and not to the 1TB drive. All other files and server configs can then be stored on that drive.