this post was submitted on 01 Feb 2025
46 points (94.2% liked)

Selfhosted


Since 2016, I've had a fileserver mostly just for backups. System is on 1 drive, RAID6 for files, and semi-annual cold backup.

I was playing with Photoprism, and their docs say "we recommend placing the storage folder on a local SSD drive for best performance." In this case, the storage folder holds basically everything except the pictures themselves, such as the database files.

Up until now, if I lost any database files, it was just a matter of rebuilding them by re-indexing my photos or whatever, but I'm looking for something more robust since I'll have some friends/family using Pixelfed, Matrix, etc.

So my question is: Is it a valid strategy to keep database files on the SSD with some kind of nightly backup to RAID, or should I just store the whole lot on the RAID from the get go? Or does it even matter if all of these databases can fit in RAM anyway?

edit: I'm just now learning of ZFS caching which might be my answer.
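
For reference, the kind of nightly copy I have in mind could be as simple as a cron job plus rsync. The paths here are hypothetical, and a live SQL database would need a proper dump rather than a raw file copy (see the note on atomic backups further down the thread):

    # /etc/cron.d/nightly-appdata-backup -- copy app data from the SSD to the RAID at 03:00
    # $(date +\%A) puts each weekday in its own folder, giving a week of rotating copies
    # (% must be escaped in cron entries)
    0 3 * * * root rsync -a --delete /ssd/appdata/ /raid/backups/appdata/$(date +\%A)/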

top 50 comments
[–] mlg 2 points 20 hours ago (1 children)

I have run photoprism straight from mdadm RAID5 on some ye olde SAS drives with only a reduction in indexing speed (about 30K photos, which took ~2 hours to index with GPU TensorFlow).

That being said, I'm in a similar boat doing an upgrade, and I have some warnings I've found helpful:

  1. Consumer-grade NVMe drives are not designed for tons of write ops, so they should optimally only be used in RAID 0/1/10. RAID 5/6 will literally start with a massive parity rip on the drives, and the default timer for RAID checks on Linux is 1 week. Same goes for ZFS and mdadm caching, so proceed with caution (i.e. 3-2-1 backups) if you go that route. Even if you end up doing RAID 5/6, make sure you get quality hardware with decent TBW, as server-grade NVMe drives are often triple the TBW rating.
  2. ZFS is a load of pain if you're running anything related to Fedora or Red Hat, and the performance implications, from lots and lots of testing, are still arguably inconclusive for a NAS/home lab setup. Unless you rely on the specific feature set or are building an actual hefty storage node, stock mdadm and LVM will probably fulfill your needs.
  3. Btrfs has all the features you need but its performance is a load of trash. I highly recommend XFS for file integrity features + built-in data dedup, and mdadm/LVM for the rest (a minimal sketch of that stack follows this list).
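
A minimal sketch of that mdadm + LVM + XFS stack, with hypothetical device and volume names (adjust devices and sizes to your hardware):

    # Mirror two NVMe drives with mdadm, then put LVM on top and format with XFS
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/nvme0n1 /dev/nvme1n1
    pvcreate /dev/md0
    vgcreate fastvg /dev/md0
    lvcreate -L 200G -n appdata fastvg
    mkfs.xfs /dev/fastvg/appdata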

I'm personally going with the NVMe scheduled backups to RAID because the caching just doesn't seem worth it when I'm gonna be slamming huge media files around all day along with running VMs and other crap. For context, the 2TB NVMe brand I have is only rated for 1200 TBW. That's probably more than enough for a file server, but for my homelab server it would just be caching constantly with whatever workload I'm throwing at it. Would still probably last a few years no issues, but SSD pricing has just been awful these past few years.
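
For a rough sense of scale: assuming something like 100 GB of writes per day (already a lot for a plain file server), 1200 TBW ÷ 0.1 TB/day = 12,000 days, or roughly 33 years of rated endurance. Constant caching and VM churn can eat into that a lot faster.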

On a related note, Photoprism needs to upgrade to Tensorflow 2 so I don't have to compile an antiquated binary for CUDA support.

[–] ch00f 2 points 18 hours ago

Thanks for the tips. I'll definitely at least start with mdadm since that's what I've already got running, and I've got enough other stuff to worry about.

Are you worried at all about bit rot? I hear that's one drawback of mdadm or raid vs. zfs.

Also, any word on when photoprism will support the Coral TPU? I've got one of those and haven't found much use for it.

[–] [email protected] 8 points 1 day ago (1 children)
  1. You don't need zfs cache. Stay away from it. This isn't going to help with what you want to do anyway. Just have enough RAM.

  2. You need to back up your stuff. Follow the 3-2-1 rule. RAID is not a backup.

  3. Don't use hardware RAID; there are many benefits to using software RAID these days.

With that said, let's dig into it. You don't really need NVMe drives tbh; SATA is probably going to be sufficient here. Mirrored drives will be enough as long as you are backing up your data. This also depends on how much space you will need.

I just finished building out my backup and storage solution and ended up wanting NVMe drives for certain services that run. I just grabbed a few 1 TB drives and mirrored them. Works great and I do get better performance, even with other bottlenecks. This is then replicated to another server for backup and also to cloud backup.

You also haven't said what hardware you are currently using or whether you are using any software for the RAID. Are you currently using ZFS? Unraid? What hardware do you have? You might be able to use a PCIe slot to install multiple NVMe drives in the same slot. This requires bifurcation though.

[–] ch00f 2 points 1 day ago (1 children)

Current hardware is an ancient fanless motherboard from 2016. RAID6 is through mdadm. Four of the drives are through a super slow PCIe 2.0 x1 card.

New motherboard (just ordered) is a Supermicro H13SAE-MF, which has dual NVMe slots and a built-in RAID controller for them.

[–] [email protected] 2 points 1 day ago (1 children)

Any reason why that board? Not 100% sure what you are trying to do, but it seems like an expensive board for a home NAS. I feel like you could get more value with other hardware. Again, you don't need a RAID controller these days; they are a pain to deal with and provide less protection compared to software RAID. It looks like the x16 slot on that board can be split to 8/8, so if needed you can add an adapter to add 2 NVMes.

You can just get an HBA card and add a bunch of drives to that as well if you need more data ports.

I would recommend doing a bit more research on hardware and trying to figure out what you need ahead of time. Something like an ASRock motherboard might be better in this case. The EPYC CPU is fine, but maybe get something with RDIMM memory. I would just make sure it has a management port like the IPMI on the Supermicro.

[–] ch00f 2 points 1 day ago

I wanted to get something with a lot of upgrade potential, and this was the cheapest option to get my foot in the door with an EPYC processor.

Also needed two PCIe slots that could do at least x8: one for the HBA card and one for an Intel Arc card for video streaming.

[–] [email protected] 3 points 1 day ago* (last edited 1 day ago)

2 HDDs (mirrored zpool), 1 SATA SSD for cache, 32 GB RAM

First read: 120 MB/s

Read while fully cached (obviously in RAM): 4.7 GB/s

[–] one_knight_scripting 7 points 2 days ago* (last edited 2 days ago) (1 children)

Hah. I see you're looking into ZFS caching. Highly recommend. I'm running Ubuntu 24.04 with root on ZFS RAID10: twelve data drives and one NVMe cache drive. Gotta say it's performing exceptionally. ZFS is a bit tricky in that it requires an HBA, not a RAID card, so you may need to flash the RAID card to get it working like I did. I've put together a GitHub repo for the install on ZFS RAID10, but you should easily be able to change it to RAIDz2. Fair warning: it wipes the drives entirely.

https://github.com/Reddimes/ubuntu-zfsraid10/
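
For a rough idea, a bare-bones striped-mirror pool with an NVMe read cache looks something like this (hypothetical device names and a smaller drive count; the repo above does the full root-on-ZFS setup):

    # Two mirrored pairs striped together ("RAID10"), plus an NVMe partition as L2ARC read cache
    zpool create tank mirror sda sdb mirror sdc sdd
    zpool add tank cache nvme0n1p1
    zpool status tank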

[–] ch00f 2 points 2 days ago (2 children)

Picked up an LSI SAS 9305-16i. I was always planning to do software RAID, so I think it'll do the trick for ZFS.

[–] [email protected] 1 points 23 hours ago (1 children)

Don't make the same mistake I did. Get a backup in place before using ZFS. Using ZFS and RAIDing your drives together makes them a singular failure point. If ZFS fucks up, you're done. The only way to mitigate this is having another copy in a different pool and preferably different machine. I got lucky that my corrupted ZFS pool was still readable and I could copy files off, but others have not been so lucky.
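
A rough sketch of what that second copy can look like with plain ZFS tooling (pool, dataset, and host names are hypothetical; tools like syncoid automate the incremental version of this):

    # Snapshot the dataset, then replicate it to a pool on another machine
    zfs snapshot tank/data@nightly-2025-02-01
    zfs send tank/data@nightly-2025-02-01 | ssh backuphost zfs recv -F backup/data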

[–] ch00f 2 points 23 hours ago

Yeah, I wouldn't dare.

The fact that I migrated from a 3-drive to a 6-drive mdadm RAID without losing anything is a damn miracle.

[–] one_knight_scripting 2 points 2 days ago (1 children)

Hell yeah, it will. I need one of those bad boys.

[–] ch00f 2 points 2 days ago

Lucked out on eBay and got it for $50.

[–] [email protected] 4 points 2 days ago* (last edited 1 day ago)

Spinning disks can perform well if you do it right

https://www.raidz-calculator.com/

Just toss in some high-endurance SSDs and you are golden.

[–] [email protected] 7 points 2 days ago (2 children)

Both your RAID and NVMe data should be getting backed up daily to 2 different destinations, if it's irreplaceable.

But to answer your question, just place the DB and cache files for Photoprism on the NVMe, and the photos themselves on the RAID.
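
With Photoprism's official container that split is just two volume mounts; the container-side paths are Photoprism's documented defaults, while the host paths (and admin password) here are hypothetical:

    docker run -d --name photoprism \
      -e PHOTOPRISM_ADMIN_PASSWORD="change-me" \
      -v /mnt/nvme/photoprism/storage:/photoprism/storage \
      -v /mnt/raid/photos:/photoprism/originals \
      -p 2342:2342 \
      photoprism/photoprism:latest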

[–] ch00f 1 points 2 days ago (2 children)

Can you suggest a method for two-destination daily backups that don’t involve a 3rd party service? At the moment, I’m doing every six months or so on two sets of cold storage, one offsite.

[–] AtariDump 1 points 1 day ago* (last edited 1 day ago) (1 children)
[–] ch00f 2 points 1 day ago (1 children)

Doing that every day feels a bit impractical. I already do that every few months.

[–] AtariDump 1 points 23 hours ago

All a matter of your risk tolerance and how often the data changes.

[–] [email protected] 2 points 2 days ago

That sounds reasonable, although I'd be worried about not having an extra local daily backup in case files get corrupted or accidentally deleted.

[–] just_another_person 14 points 2 days ago* (last edited 2 days ago) (1 children)

You may be confused about the terminology:

RAID = many disks that help prevent data loss

NVME = Just an SSD

A collection of drives joined into a RAID group prevents data loss. An NVMe drive is just a drive, which has no data loss prevention. If that drive dies, the data dies.

If those docs say anything about SSD, it's because their code is slow, and a faster disk just makes that less obvious.

[–] ch00f 3 points 2 days ago* (last edited 2 days ago) (1 children)

I understand all of that. Sorry I didn't explain it well.

I have a RAID6 for data and a single HDD for system files. I'd like to move the HDD to an NVMe/SSD. I suppose I could make another RAID with an additional NVMe, but I've found it easier to deal with booting off a traditional drive.

My solution for redundancy for the NVME is to just backup the bits that I need every night. These are usually just a few hundred megabyte database files. I'm curious if that's a typical solution.

edit: to clarify, it's a software raid with mdadm.

[–] [email protected] 3 points 2 days ago* (last edited 2 days ago) (1 children)

I just made a mirror out of two NVMes; they got cheap enough not to bother too much with the loss of capacity. Of course, that limits what I can put there, so I use a bit of tiered storage between my NVMe and HDD pools.

Just think in terms of data loss: are you going to be ok if you lost the data between backups? If the answer is yes, one NVMe is enough.

[–] ch00f 1 points 2 days ago

Yeah, I'm only serving one timezone, so if I can swing nightly backups at periods of low activity, I'd only be out 1 day, which isn't that big a deal.

[–] [email protected] 6 points 2 days ago (2 children)

RAID gives you greater uptime. That is all. You should also have backups. So how much uptime do you need?

[–] [email protected] 3 points 2 days ago

It also can improve performance when done with enough disks

[–] ch00f 1 points 2 days ago (1 children)

I’ve heard that too. Hmm.

Up until recently, the server mostly hosted a photo library and media library that didn't tend to change very often. So an HDD in a fireproof safe updated once a year was enough for me.

I guess I’ll have to come up with a better solution. What would you recommend for automatic backups? I’m trying to avoid 3rd party services.

[–] [email protected] 2 points 2 days ago (1 children)

I've switched to restic for my backups and have been very happy with it. Very fast, encrypted, and with snapshot history.
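
The basic flow is only a few commands (the repository location here is hypothetical; restic prompts for the encryption password or reads RESTIC_PASSWORD):

    restic -r sftp:backuphost:/srv/restic-repo init                        # create the encrypted repository once
    restic -r sftp:backuphost:/srv/restic-repo backup /mnt/nvme/appdata    # run nightly via cron or a systemd timer
    restic -r sftp:backuphost:/srv/restic-repo snapshots                   # list snapshot history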

[–] ch00f 1 points 2 days ago (1 children)

So are you thinking like a raspberry pi with an 18TB hard drive accepting nightly backups through restic?

[–] [email protected] 2 points 1 day ago

Sounds like a good plan to me

[–] [email protected] 4 points 2 days ago (9 children)

Note: If you want to back up a DBMS, you're going to want to use some system that ensures the backup is atomic.
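
For example, for a MariaDB/MySQL-backed service a consistent dump (rather than copying the raw data directory) might look like this; database names, paths, and credentials are hypothetical:

    # --single-transaction takes a consistent InnoDB snapshot without locking writers
    mysqldump --single-transaction --routines photoprism > /raid/backups/photoprism-$(date +%F).sql
    # PostgreSQL-backed services get the same guarantee from pg_dump
    pg_dump -Fc synapse > /raid/backups/synapse-$(date +%F).dump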

load more comments (9 replies)
[–] asmoranomar 3 points 2 days ago* (last edited 2 days ago)

If your device permits it, run the RAID on disk and use an NVMe as cache. My Synology does this.

[–] [email protected] 2 points 2 days ago (1 children)

I've been building this out for some time now. I run a RAID 1 on mechanical disks for file storage and have an SSD for the logic/database stuff. I've bought a second SSD and will put the two in a RAID as well because of their limited lifespan.

[–] ch00f 2 points 2 days ago (1 children)

Hm... My new motherboard does actually have dual NVME M.2 slots. I might end up doing that (once my budget recovers a bit).

[–] [email protected] 1 points 2 days ago

You don't even need special NVMes. You can buy mid-range ones.

Best is if they don't fail together, so ideally they should not be from the same batch or have the same age and amount of writes.

I do highly recommend having a RAID array for any server SSD, since they fail unpredictably and without any warning signs.
