Do most of my work on NFS, with ZFS backing on RAIDZ2; send snapshots for offline backup.
Don't have a serious offsite setup yet, but it's coming.
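As a rough sketch of what that snapshot-and-send routine might look like (the pool, dataset, and mount point names here are made up, not from the comment):

```shell
# Create a point-in-time snapshot of the working dataset.
zfs snapshot tank/work@2024-06-01

# Full send, compressed to a file on an offline/external disk.
zfs send tank/work@2024-06-01 | gzip > /mnt/offline/work-2024-06-01.zfs.gz

# Later sends can be incremental against the previous snapshot,
# which keeps the offline copies small.
zfs send -i tank/work@2024-05-01 tank/work@2024-06-01 \
  | gzip > /mnt/offline/work-incr-2024-06.zfs.gz
```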
Github for projects, Syncthing to my NAS for some config files and that's pretty much it, don't care for the rest.
Restic since 2018, both to locally hosted storage and to remote over ssh. I've "stuff I care about" and "stuff that can be relatively easily replaced" fairly well separated so my filtering rules are not too complicated. I used duplicity for many years before that and afbackup to DLT IV tapes prior to that.
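A hedged sketch of how that "care about" vs. "easily replaced" separation might look with restic's exclude rules (the repository URL and paths are invented for illustration):

```shell
# Repository over SFTP; restic also supports local and S3-style backends.
export RESTIC_REPOSITORY=sftp:backup@remote.example.com:/srv/restic
export RESTIC_PASSWORD_FILE=~/.config/restic/password

# Back up the home directory, skipping the easily-replaced paths
# listed one-per-line in the exclude file.
restic backup ~/ --exclude-file ~/.config/restic/excludes.txt --tag daily

# excludes.txt might contain entries like:
#   ~/.cache
#   ~/Downloads
```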
ZFS send / receive and snapshots.
Does this method let you pick what you need to back up, or is it the entire filesystem?
It allows me to copy select datasets inside the pool.
So I can choose rpool/USERDATA/so-n-so_123xu4 for user so-n-so. I can also choose to copy some or all of rpool/ROOT/ubuntu_abcdef and its nested datasets.
I settle for backing up users and rpool/ROOT/ubuntu_abcdef, ignoring the stuff in var datasets. This gets me my users' homes, root's home, and /opt. 'Tis all I need. I have snapshots and mirrored M.2 SSDs for handling most other problems (which I've not yet had).
The only bugger is /boot (on bpool). Kernel updates grow in there and fill it up, even if you remove them via apt... because snapshots. So I have to be careful to clean its snapshots.
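For illustration, selective replication of those datasets might look something like this (the target pool name and snapshot labels are assumptions, not from the comment):

```shell
# Recursive snapshot of the user datasets, then a replicated send of
# just that subtree to a backup pool.
zfs snapshot -r rpool/USERDATA@weekly
zfs send -R rpool/USERDATA@weekly | zfs receive -u backup/USERDATA

# Pruning old bpool snapshots releases the space held by removed kernels.
zfs destroy bpool/BOOT/ubuntu_abcdef@old-snapshot
```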
I use rsync to an external drive, but before that I toyed a bit with pika backup.
I don't automate my backup because I physically connect my drive to perform the task.
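A minimal version of that manual rsync run might be (the mount point and excludes are placeholders):

```shell
# -a preserves permissions/times, -AX adds ACLs and extended attributes;
# --delete makes the drive mirror the source exactly.
rsync -aAX --delete --info=progress2 \
  --exclude='.cache/' \
  "$HOME/" /media/usb-backup/home/
```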
Machine A:
Machine B:
Most of my data is backed up to (or just stored on) a VPS in the first instance, and then I backup the VPS to a local NAS daily using rsnapshot (the NAS is just a few old hard drives attached to a Raspberry Pi until I can get something more robust). Very occasionally I'll back the NAS up to a separate drive. I also occasionally backup my laptop directly to a separate hard drive.
Not a particularly robust solution, but it gives me some peace of mind. I would like to build a better NAS that can support RAID, as I was never able to get it working with the Pi.
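For reference, a daily rsnapshot job is typically driven by a config along these lines (the paths and host are invented; fields in rsnapshot.conf must be TAB-separated):

```
# /etc/rsnapshot.conf excerpt
snapshot_root	/mnt/nas/rsnapshot/
retain	daily	7
retain	weekly	4
backup	backup@vps.example.com:/home/	vps/
```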
When I do something really dumb I typically just use dd to create a disk image. I should probably find something better.
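If it helps, a raw dd image of a whole disk looks like this (the device and output path are placeholders; the result is a raw image rather than a true ISO):

```shell
# Double-check the source device with lsblk before running this.
dd if=/dev/sdX of=/mnt/backup/disk.img bs=4M status=progress conv=fsync

# Restoring is the same command with if= and of= swapped.
```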
Restic to Synology nas, Synology software for cloud backup.
Good ol' fashioned rsync once a day to a remote server with zfs, with daily zfs snapshots (rsync.net). Very fast because it only needs to send changed/new files, and it has saved my hide several times when I needed to access deleted files or old versions of some files from the zfs snapshots.
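That daily push could be a one-line cron job along these lines (the user, host, and paths are made up for illustration):

```shell
# crontab entry: run at 02:30 every day; only changed/new files are sent.
# 30 2 * * * rsync -az --delete /home/user/ user@user.rsync.net:backup/
rsync -az --delete /home/user/ user@user.rsync.net:backup/
```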
Periodic backup to external drive via Deja Dup. Plus, I keep all important docs in Google Drive. All photos are in Google Photos. So it's only my music really which isn't in the cloud. But I might try uploading it to Drive as well one day.
Restic with Deja Dup GUI
Vorta + borgbase
The yearly subscription is cheap and fits my storage needs by quite some margin. Gives me peace of mind to have an off-site back up.
I also store my documents on Google Drive.
I use Pika Backup, which uses borg backup under the hood. It's pretty good, with amazing documentation. The main issue I have with it is that it's really finicky and kind of a pain to set up, even if it "just works" after that.
Can you restore from it? That’s the part I’ve always struggled with?
The way pika backup handles it, it loads the backup as a folder you can browse. I've used it a few times when hopping distros to copy and paste stuff from my home folder. Not very elegant, but it works and is very intuitive, even if I wish I could just hit a button and reset everything to the snapshot.
I use timeshift. It really is the best. For servers I go with restic.
I use timeshift because it was pre-installed. But I can vouch for it; it works perfectly, and lets you choose and tweak every single thing in a legible user interface!
Anything important I keep in my Dropbox folder, so then I have a copy on my desktop, laptop, and in the cloud.
When I turn off my desktop, I use restic to back up my Dropbox folder to a local external hard drive, and then restic runs again to back up to Wasabi, which is a storage service like Amazon's S3.
Same exact process when I turn off my laptop, except sometimes I don't have my laptop's external HD plugged in, so that gets skipped.
So that's three local copies, two local backups, and two remote backup storage locations. Not bad.
Changes I might make:
I used seafile for a long time but I couldn't keep it up so I switched to Dropbox.
Advice, thoughts welcome.
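A sketch of that two-target restic routine (the repository paths, bucket name, and credentials are placeholders; restic's s3 backend works with S3-compatible services like Wasabi):

```shell
# First copy: local external drive.
restic -r /mnt/external/restic backup ~/Dropbox

# Second copy: Wasabi via restic's S3 backend.
export AWS_ACCESS_KEY_ID=placeholder-key
export AWS_SECRET_ACCESS_KEY=placeholder-secret
restic -r s3:https://s3.wasabisys.com/my-backup-bucket backup ~/Dropbox
```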
I actually move my Documents, Pictures, and other important folders inside my Dropbox folder and symlink them back to their original locations.
This gives me the same Docs, Pics, etc. folders synced on every computer.
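The move-and-symlink trick, sketched with throwaway temp directories so it can run anywhere (substitute ~/Dropbox and ~/Documents in practice):

```shell
DROPBOX=$(mktemp -d)    # stands in for ~/Dropbox
HOMEDIR=$(mktemp -d)    # stands in for ~
mkdir "$HOMEDIR/Documents"

# Move the real folder into Dropbox, then link it back to its old spot.
mv "$HOMEDIR/Documents" "$DROPBOX/Documents"
ln -s "$DROPBOX/Documents" "$HOMEDIR/Documents"
```

Anything written to the old path now lands inside Dropbox and gets synced.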
Either an external hard drive or a pendrive. Just put one of those on a keychain and voila, a perfect backup solution that does not need internet access.
...it's not dumb if it (still) works. :^)
I use duplicity to a drive mounted off a Pi for local, tarsnap for remote. Both are command-line tools; tarsnap charges for their servers based on exact usage. (And thanks for the reminder; I'm due for another review of exactly what parts of which drives I'm backing up.)
I run Openmediavault and I back up using BorgBackup. Super easy to set up, use, and modify.
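For anyone curious, a minimal Borg workflow looks roughly like this (the repository path is a placeholder):

```shell
# One-time: create an encrypted repository.
borg init --encryption=repokey /mnt/omv/borg-repo

# Each run: create a deduplicated archive, then prune old ones.
borg create --stats /mnt/omv/borg-repo::'{hostname}-{now}' ~/data
borg prune --keep-daily 7 --keep-weekly 4 /mnt/omv/borg-repo
```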
I've got an SMB server set up with a 12 TB server drive. Anything important gets put on there.
Edit: fixed spelling
I use Timeshift for daily, weekly, and monthly rsync backups. Then I create image backups using Clonezilla every month or two. I try to follow the 3-2-1 principle (3 copies, 2 media, 1 offsite) - local computer, external drive, Google Cloud.
A separate NAS on an Atom CPU with btrfs in RAID 10, exposed over NFS.
zfs snap and zfs send to an external drive or another server.