this post was submitted on 01 Feb 2025
46 points (94.2% liked)

Selfhosted

41658 readers
866 users here now

A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
 

Since 2016, I've had a fileserver mostly just for backups. System is on 1 drive, RAID6 for files, and semi-annual cold backup.

I was playing with Photoprism, and their docs say "we recommend placing the storage folder on a local SSD drive for best performance." In this case, the storage folder holds basically everything except the pictures themselves, such as the database files.

Up until now, if I lost any database files, it was just a matter of rebuilding them by re-indexing my photos or whatever, but I'm looking for something more robust since I'll have some friends/family using Pixelfed, Matrix, etc.

So my question is: Is it a valid strategy to keep database files on the SSD with some kind of nightly backup to RAID, or should I just store the whole lot on the RAID from the get go? Or does it even matter if all of these databases can fit in RAM anyway?

edit: I'm just now learning about ZFS caching, which might be my answer.
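
edit 2: from what I'm reading, bolting a spare SSD onto an existing pool as a read cache (L2ARC) looks like a one-liner; the pool and device names here are just placeholders from the docs, not my actual setup:

    # add an SSD as an L2ARC read cache to an existing pool
    zpool add tank cache /dev/nvme0n1

    # confirm it shows up under "cache" in the pool layout
    zpool status tank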

[–] [email protected] 4 points 2 days ago (1 children)

Note: If you want to back up a DBMS, you're going to want to use some system that ensures the backup is atomic.
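
The simplest option is usually the DBMS's own dump tool, which produces a consistent copy while the database keeps running. Database names and paths below are just examples:

    # PostgreSQL: pg_dump gives a consistent snapshot of one database
    pg_dump -U photoprism photoprism > /mnt/raid/backups/photoprism.sql

    # MariaDB/MySQL: --single-transaction keeps the dump consistent for InnoDB tables
    mysqldump --single-transaction photoprism > /mnt/raid/backups/photoprism.sql

    # SQLite: use the online .backup command instead of copying the file
    sqlite3 /srv/app/data/index.db ".backup /mnt/raid/backups/index.db"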

[–] ch00f 1 points 2 days ago* (last edited 2 days ago) (2 children)

Can you elaborate? (learning a lot at the moment).

My thought was to just copy over the whole database directory every night at like 2am, though some of the services do offer built-in database backup tools, which I assume are designed to do what you're talking about.

[–] [email protected] 1 points 19 hours ago* (last edited 19 hours ago)

Some databases support snapshotting (which won't take the database down), and I believe backup systems can be aware of the DBMS. I'm not the best person to ask about best practices, because I don't admin a DBMS, but it's an issue I do mention when people talk about backups and DBMSes -- if you have one, be aware that a backup system has to take the DBMS into account one way or another if you want to avoid backing up the database in an inconsistent state.

[–] [email protected] 3 points 2 days ago (2 children)

Basically, you want to shut down the database before backing up. Otherwise, your backup might be mid-transaction, i.e. broken. If it's Docker, you can just docker-compose down it, back up, and then docker-compose up, or the equivalent.
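
Roughly like this in a nightly cron script (paths and directory names are made up, adjust for your stack):

    #!/bin/sh
    # stop the containers so nothing is writing to the database files
    cd /srv/photoprism && docker-compose down

    # copy the data to the RAID array while everything is quiescent
    rsync -a --delete /srv/photoprism/storage/ /mnt/raid/backups/photoprism/

    # bring the stack back up
    docker-compose up -d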

[–] ch00f 2 points 2 days ago (1 children)

Wouldn't this require the service to go down for a few minutes every night?

[–] [email protected] 1 points 1 day ago

Yup (although minutes seems long, and depending on usage, weekly might be fine). You can also combine it with updates, which require going down anyway.
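
e.g. one weekly script that does both (paths are placeholders):

    # take the stack down, back up, pull updated images, bring it back
    docker-compose down
    rsync -a /srv/photoprism/storage/ /mnt/raid/backups/photoprism/
    docker-compose pull
    docker-compose up -d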

[–] [email protected] 2 points 2 days ago (1 children)

Alternatively, if your databases are on a filesystem that supports snapshots (LVM, btrfs, or ZFS, for instance), you can take a snapshot of the filesystem, mount the snapshot, and back up the database from it. This ensures the backup is consistent with itself (the backed-up directory was not written to between the beginning and the end of the backup).
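
With LVM, for instance, that looks roughly like this (volume group, LV, and mount point names are placeholders):

    # create a point-in-time snapshot of the volume holding the databases
    lvcreate --snapshot --size 5G --name db-snap /dev/vg0/data

    # mount the snapshot read-only and back up from the frozen view
    mkdir -p /mnt/db-snap
    mount -o ro /dev/vg0/db-snap /mnt/db-snap
    rsync -a /mnt/db-snap/ /mnt/raid/backups/db/

    # clean up
    umount /mnt/db-snap
    lvremove -y /dev/vg0/db-snap

    # ZFS equivalent: snapshot, back up from the hidden .zfs directory, destroy
    zfs snapshot tank/db@nightly
    rsync -a /tank/db/.zfs/snapshot/nightly/ /mnt/raid/backups/db/
    zfs destroy tank/db@nightly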

[–] ch00f 2 points 2 days ago (1 children)

Doesn't this just pass the issue to when the snapshot is made? If the snapshot is created mid-database update, won't you have the same problem?

[–] [email protected] 2 points 19 hours ago* (last edited 19 hours ago) (1 children)

No, because the DBMS is designed to survive a power loss in the middle of a write without being corrupted. It'll do something vaguely like this if you are, for example, overwriting an existing record with a new one (a rough shell-flavored sketch follows the list):

  1. Write that you are going to make a change in a way that does not affect existing data.

  2. Perform a barrier operation (which could amount to just syncing to disk, or could just tell the OS's disk cache system to place some restrictions on how it later syncs to disk, but in any event ensures that all writes before the barrier reach the disk before any writes after it).

  3. Replace the existing record. This may be destructive of existing data.

  4. Potentially remove the data written in Step 1, depending upon database format.
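
A crude shell-level analogy of those four steps (not what any particular DBMS literally does, just the shape of the idea; file names and offsets are made up, and the sync/dd flags assume GNU coreutils):

    # 1. write the intended change to a journal file, leaving live data untouched
    printf 'set record at offset 4096 to "new value"\n' > journal.tmp

    # 2. barrier: force the journal onto disk before touching anything else
    sync journal.tmp

    # 3. destructively overwrite the live record in place
    printf 'new value' | dd of=records.db bs=1 seek=4096 conv=notrunc status=none
    sync records.db

    # 4. the change is durable either way now, so drop the journal
    rm journal.tmp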

If the DBMS loses power and comes back up, and the data from Step 1 is present and complete, it'll consider the operation committed and simply continue the steps from there. If Step 1 is only partially on disk, it'll treat the commit as not having gone through and delete the partial data. From the DBMS's standpoint, the change either happens as a whole or does not happen at all.

That works fine for power loss, or for a filesystem snapshotted at an instant in time. Seeing a partial commit is fine as long as the DBMS's view of the system is from a single instant; if you start it up against that state, it will either treat the change as complete and committed or throw out the incomplete commit.

However, if you are a backup program happily reading the contents of a file, you may be reading the database file with no synchronization, and you may wind up with bits of one or several commits as the backup program reads the file while the DBMS writes to it -- a corrupt database once the backup is restored.

[–] ch00f 1 points 18 hours ago

Very good to know! Thanks.