this post was submitted on 01 Aug 2024
42 points (97.7% liked)

Selfhosted

It's fairly obvious why stopping a service while backing it up makes sense. Imagine backing up Immich while it's running. You start the backup: the database is backed up first, then the image assets are copied, which could take an hour. While the assets are being copied, a new image is uploaded. The live database knows about it, but the one you've backed up doesn't. Then your backup process reaches the new image asset and copies it. If you restore this backup, Immich will contain an asset that isn't known to the database. To avoid scenarios like this, you'd stop Immich while the backup is running.
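
For illustration, a minimal sketch of the stop-first approach, assuming a docker-compose based install with its data under /srv/immich and a local backup target (all paths and names are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: stop the stack, take a consistent copy, start it again.
# Paths and the backup target are assumptions.
set -euo pipefail

cd /srv/immich
docker compose stop                    # no new uploads or DB writes from here on
rsync -a --delete ./ /backup/immich/   # database files and assets come from the same frozen state
docker compose start                   # the service is down for the whole copy, possibly an hour
```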

Now consider a system that can do instant snapshots, like ZFS or LVM. Immich is running, you stop it, take a snapshot, then restart it. Then you back up Immich from the snapshot while Immich is running. This should reduce the downtime needed to the time it takes to do the snapshot. The state of the Immich data in the snapshot should be equivalent to backing up a stopped Immich instance.
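
As a sketch of that flow, assuming the Immich data lives on a ZFS dataset tank/immich mounted at /srv/immich (dataset name, mountpoint and backup path are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: brief stop, instant snapshot, restart, then back up from the snapshot.
# Dataset name, mountpoint and backup target are assumptions.
set -euo pipefail

SNAP="backup-$(date +%F)"

cd /srv/immich
docker compose stop
zfs snapshot "tank/immich@${SNAP}"     # near-instant, so downtime is seconds
docker compose start

# Copy from the read-only snapshot while Immich is already serving again.
rsync -a "/srv/immich/.zfs/snapshot/${SNAP}/" /backup/immich/
zfs destroy "tank/immich@${SNAP}"
```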

Now consider the case above without stopping Immich while taking the snapshot. In theory the data you're backing up should still represent the complete state of Immich at a point in time, eliminating the possibility of divergence between the database and the assets. It would, however, represent the state of a live Immich instance (e.g. lock files, etc.). Wouldn't restoring from such a backup be equivalent to kill -9 or pulling the cable and restarting the service? If a service can recover from a cable pull, is it reasonable to expect it to recover from restoring a snapshot taken while live? If so, is there much point to stopping services during snapshots?

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (2 children)

> Wouldn't restoring from such a backup be equivalent to kill -9 or pulling the cable and restarting the service?

Disclaimer: Not familiar with Immich, but this is what I've experienced generally.

AFAIK, effectively yes. The only thing you might lose is anything in memory that hasn't been written to disk at the time the snapshot was taken (which is still effectively equivalent to kill -9).

At work, we use Veeam which is snapshot based, and database server restores (or spinning up a test DB based off of production) work just fine. That said, we still take scheduled dumps/backups of the database servers just to have known-good states to roll back to if ever the need arises.
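
For the scheduled-dump side of that, a minimal sketch, assuming the database runs in a container named immich_postgres with a postgres superuser (container, user, database and paths are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: nightly logical dump as a known-good state, alongside snapshot backups.
# Container name, DB user/name and the target path are assumptions.
# Example crontab entry:  0 2 * * *  /usr/local/bin/db-dump.sh
set -euo pipefail

docker exec immich_postgres pg_dump -U postgres -d immich \
  | gzip > "/backup/dumps/immich-$(date +%F).sql.gz"
```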

[–] [email protected] 2 points 3 months ago* (last edited 3 months ago) (1 children)

Thanks for validating my reasoning. And yeah, this isn't Immich-specific; it would be valid for any process and its data.

[–] [email protected] 2 points 3 months ago (1 children)

What I've seen on corporate servers is that when the backup starts, the database goes into a different mode: writes go to a temporary writable area while the read-only database files are backed up, and at the end of the backup that accumulated blob is stored as well.
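
PostgreSQL's pg_basebackup is one concrete example of that kind of online backup mode; a sketch, assuming a locally reachable server and a replication-capable user (host, user and target directory are placeholders):

```bash
# Sketch: online base backup of a running PostgreSQL server. The server keeps
# accepting writes; the WAL generated during the copy is streamed alongside it,
# roughly the "blob" described above. Host, user and target dir are assumptions.
pg_basebackup -h localhost -U replicator -D /backup/pg-base -Ft -z -Xs -P
```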

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

Yeah, if you're making a backup using the database system itself, it would make sense for it to do something like that if it stays live while backing up. If you think about it, it's quite similar to taking a snapshot of the volume holding an app's data files while the app is still running: it keeps writing as normal while you copy the data from the read-only snapshot. Of course there's no built-in way to capture the newly written data without stopping the process, but you could get the downtime down to a small number. 😄
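
The same idea with LVM, since that's the other snapshot system mentioned in the post; a sketch assuming the app data sits on a logical volume /dev/vg0/appdata (VG/LV names, snapshot size and paths are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: copy-on-write LVM snapshot of a live volume, backed up read-only.
# VG/LV names, snapshot size and paths are assumptions.
set -euo pipefail

lvcreate --snapshot --name appdata-snap --size 5G /dev/vg0/appdata
mkdir -p /mnt/appdata-snap
mount -o ro /dev/vg0/appdata-snap /mnt/appdata-snap   # XFS may need -o nouuid,ro

rsync -a /mnt/appdata-snap/ /backup/appdata/          # the app keeps writing to the original LV meanwhile

umount /mnt/appdata-snap
lvremove -y /dev/vg0/appdata-snap
```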

[–] gedhrel 2 points 3 months ago (1 children)

The other thing to watch out for is if you're splitting state between volumes, but I think you've already ruled that out.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

Oh yeah, that would be a disaster if not handled correctly.
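
For what it's worth, if the state does end up split across several ZFS datasets, a recursive snapshot is taken atomically across all of them, so the pieces still line up; a sketch with placeholder dataset names:

```bash
# Sketch: when an app's database and assets live on separate ZFS datasets,
# a recursive snapshot captures every child of tank/immich at the same instant,
# so the database and asset copies can't diverge. Dataset names are assumptions.
zfs snapshot -r tank/immich@backup-2024-08-01

zfs list -t snapshot -r tank/immich   # verify: one @backup-... snapshot per child dataset
```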

[–] gedhrel 2 points 3 months ago (1 children)

I'd be cautious about the "kill -9" reasoning. It isn't necessarily equivalent to yanking power.

Contents of application memory lost, yes. Contents of unflushed OS buffers, no. Your db will be fsyncing (or moral equivalent thereof) if it's worth the name.

This is an aside; backing up from a volume snapshot is half a reasonable idea. (The other half is ensuring that you can restore from the backup, regularly, automatically, and the third half is ensuring that your automated validation can be relied on.)

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

> Contents of application memory lost, yes. Contents of unflushed OS buffers, no. Your db will be fsyncing (or moral equivalent thereof) if it's worth the name.

Good point. I guess kill -9 is somewhat less catastrophic than a power-yank. If a service is written well enough to handle the latter, it should be able to handle the former. Should, subject to very interesting bugs that can hide in the difference.

> This is an aside; backing up from a volume snapshot is half a reasonable idea. (The other half is ensuring that you can restore from the backup, regularly, automatically, and the third half is ensuring that your automated validation can be relied on.)

I'm currently thinking of setting up automatic restores of these backups on the off-site backup machine. That is, the backups are transferred to the off-site machine, restored into the services' data directories, and then the services are started. This should cover the second half, I think. Of course those services can't be used to store new data, because they'll be overwritten with every backup. In the event of a hard snafu where the main machine disappears, I could stop the auto-restore on the off-site machine and start using the services from it, effectively making it the main machine. If this turns out to be reasonable and workable, I might trash all of the file-based backup-and-transfer mechanisms and switch to ZFS send/recv. That should shrink the data delta between the main and off-site machines to minutes instead of hours or days. Does this make any sense?
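
A minimal sketch of what the send/recv direction of this could look like, assuming a tank/services dataset on the main machine, a backup/services dataset on an off-site host reachable as offsite, and compose files living with the data (names, host and the bookkeeping of the previous snapshot are placeholders):

```bash
#!/usr/bin/env bash
# Sketch: incremental ZFS replication to the off-site machine, followed by a
# throwaway restore there as a continuous restore test.
# Dataset names, hostname, compose path and snapshot bookkeeping are assumptions.
set -euo pipefail

PREV="repl-2024-08-01"            # last snapshot already present off-site
CURR="repl-$(date +%F-%H%M)"

zfs snapshot -r "tank/services@${CURR}"

# Send only the delta since the previous replicated snapshot.
# recv -F rolls back any writes the off-site test instances made in the meantime.
zfs send -R -i "@${PREV}" "tank/services@${CURR}" \
  | ssh offsite zfs recv -F backup/services

# Restore test: bring the services up from the replicated data, so every run
# also answers "can we actually restore from this?"
ssh offsite "cd /srv/services && docker compose up -d"
```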