this post was submitted on 29 Jul 2023
16 points (86.4% liked)

Yeah another post about backups, but hear me out.

I've read most of the other posts here on Lemmy and gone through the documentation for different backup tools (rsync, Borg, Timeshift), but all of those tools are meant for "static" files.

I mean, I have a few Docker containers with databases, Syncthing to sync files between my server, Android, desktop, and Mac, and a few Samba shares between the server, Mac, and desktop.

For example, from Borg's documentation (my sketch of that pattern follows the list):

  • Avoid running any programs that might change the files.
  • Snapshot files, filesystems, container storage volumes, or logical volumes. LVM or ZFS might be useful here.
  • Dump databases or stop the database servers.
  • Shut down virtual machines before backing up their images.
  • Shut down containers before backing up their storage volumes.
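
As I understand it, the pattern they're describing looks roughly like this (just a sketch; the volume group vg0, the repo path, and the mount point are all placeholder names):

    #!/bin/bash
    set -euo pipefail

    # Freeze the filesystem with an LVM snapshot (vg0/root is a placeholder)
    lvcreate --snapshot --size 2G --name root_snap /dev/vg0/root
    mkdir -p /mnt/root_snap
    mount -o ro /dev/vg0/root_snap /mnt/root_snap

    # Back up the frozen view (assumes the Borg repo was already created
    # with "borg init")
    borg create /backup/repo::'{hostname}-{now}' /mnt/root_snap

    # Throw the snapshot away again
    umount /mnt/root_snap
    lvremove --yes /dev/vg0/root_snap

The snapshot gives the backup tool a frozen view, so the live files can keep changing underneath. But that still leaves a lot of questions.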

How am I supposed to make a complete automated backup of my system if my files are constantly changing? If I have to stop my containers and shut down Syncthing and my Samba shares to make a full backup, that seems like way too much friction and is prone to errors...

Also, I couldn't find any mention anywhere of how to restore a full backup onto a freshly installed system with an LVM partition layout (user creation, filesystem partitioning...).

Maybe I have a poor understanding of how this works with Linux files, but doing a full backup this way feels unreliable and prone to corrupted files and backups on the server.

VMs are easy to roll back with snapshots, and I couldn't find a similar approach for a bare-metal server...

I hope someone can point me in the right direction, because right now I have the feeling I can only back up my compose files and then do a full reinstall and reconfiguration, which is exactly the work a backup is supposed to spare me... not having to reconfigure everything!

Thanks

all 7 comments
[–] fubo 6 points 1 year ago

What are you trying to accomplish? Is a filesystem-level backup really the thing you need for those services, or do you need something more like a database dump or replica?

[–] GustavoM 2 points 1 year ago

...what? All I need is a single bash script (and less than 5 minutes) to get all my previous stuff back.

[–] UnfortunateShort 2 points 1 year ago

All you need is restic, rclone, snapper and btrfs. Snapper and btrfs work almost on their own, rclone helps you connect to remote/cloud storage and restic enables very straightforward backups.

As for restic, you just need to mind that it creates lock files, so if there's an unexpected shutdown you might need to clear them manually (restic unlock). The advantage is that you can access one repo from multiple machines. And don't forget to run a cleanup command in your script (if you automate it), because restic doesn't run one automatically.
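
The core of such a script is basically this (a sketch; the repo path, password file, and retention numbers are just examples):

    #!/bin/bash
    set -euo pipefail

    export RESTIC_REPOSITORY=/srv/restic-repo      # placeholder path
    export RESTIC_PASSWORD_FILE=/root/.restic-pw   # placeholder path

    # Clear any stale locks left behind by an unexpected shutdown
    restic unlock

    # Back up whatever matters to you
    restic backup /home /etc

    # The cleanup step: restic never prunes old snapshots on its own
    restic forget --keep-daily 7 --keep-weekly 4 --prune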

rclone is usually rather easy to use, but your mileage may vary depending on the type of storage you want to use, especially if you want to tune for maximum performance.
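
For example, restic can use rclone remotes directly, something like this (the remote name "mycloud" is made up; configure yours with rclone config first):

    restic -r rclone:mycloud:backups init
    restic -r rclone:mycloud:backups backup /home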

[–] Anonymouse 1 points 1 year ago

After all the posts about backups, I started looking at mine with a more critical eye and discovered ways that it's lacking. I'm using duplicity because of the number of backends it supports (I'm using rsync; my ancient NAS has an rsync server module), it does incremental backups, it can encrypt them, and it's available in all my distros' package managers.

I'm excluding files from packages that haven't changed, and other things that can be downloaded again, like Docker images. I've used it a few times to restore a misplaced "sudo rm -rf . " in a subdirectory of home, with success! But I realized that a full restore would be time-consuming and difficult, because I don't know my LVM structure, installed packages, etc.
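
The backup call itself is nothing fancy, roughly this (the NAS hostname, rsync module, and exclude path are placeholders):

    # GPG-encrypts with this passphrase; incremental by default,
    # with a fresh full backup forced every month
    export PASSPHRASE='change-me'
    duplicity --full-if-older-than 1M \
        --exclude /home/me/Downloads \
        /home rsync://backup@nas.local::backups/home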

I call duplicity from a script via cron, so I'm updating it to dump the installed packages, LVM info, sfdisk structure, LUKS headers, and fstab into a "config" backup, with everything else going into a second backup archive. My plan is to boot from a USB disk, restore the config backup to a RAM disk, format the drives, apply the LVM structure, set up LUKS from the saved config info, mount the disks according to the saved fstab, use the package manager to reinstall the packages from the config file, and then restore the main backup on top of that.
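
The "config" dump part amounts to something like this (a Debian-flavored sketch; device names and the destination path are placeholders):

    #!/bin/bash
    set -euo pipefail
    CONF=/backup/config        # placeholder destination
    mkdir -p "$CONF"

    # Installed packages (dpkg here; use your distro's equivalent)
    dpkg --get-selections > "$CONF/packages.list"

    # LVM metadata; %s is replaced with each volume group's name
    vgcfgbackup --file "$CONF/lvm-%s.conf"

    # Partition table and LUKS header (devices are placeholders)
    sfdisk --dump /dev/sda > "$CONF/sda.sfdisk"
    cryptsetup luksHeaderBackup /dev/sda2 \
        --header-backup-file "$CONF/luks-sda2.img"

    cp /etc/fstab "$CONF/fstab"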

It's a little more work, but I'm hoping the backups will be small enough that I don't need to buy more drives to keep them.

I do have a MySQL database, but I dump it to a backup file that gets scooped up in the drive backup, so I don't need to take the DB offline. My containers' volumes are on a btrfs disk, so I can just take a snapshot and back that up. I haven't updated the script for that yet, though; it currently works with LVM snapshots.
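
Concretely, that part is along these lines (paths are placeholders, and the dump assumes credentials in ~/.my.cnf):

    # Consistent dump without taking MySQL offline (InnoDB tables)
    mysqldump --all-databases --single-transaction > /backup/mysql-dump.sql

    # Freeze the container volumes with a read-only btrfs snapshot
    btrfs subvolume snapshot -r /srv/volumes /srv/volumes-snap

    # ...point the file backup at /srv/volumes-snap...

    btrfs subvolume delete /srv/volumes-snap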

HTH, pray you never really need the backup!

[–] bleph 1 points 1 year ago* (last edited 1 year ago)

For a Dropbox-y experience (albeit a slow one) I've been using SpiderOak. I wanted an E2E-encrypted cloud backup and it fits the bill (Snowden Approved™). I can't help feeling I could get faster and cheaper results with something hand-rolled, but... time.

[–] [email protected] 1 points 1 year ago

You mentioned it, but snapshots are the key: LVM, btrfs, ZFS. Take a snapshot, back it up, delete it.
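
With ZFS, for example, it's as simple as this (the dataset and destination are placeholders):

    SNAP=tank/data@backup                            # placeholder names
    zfs snapshot "$SNAP"                             # take a snapshot
    zfs send "$SNAP" | gzip > /backup/data.zfs.gz    # back it up
    zfs destroy "$SNAP"                              # delete it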