this post was submitted on 04 Aug 2023

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.

Rules:

  1. Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.

  2. No spam posting.

  3. Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.

  4. Don't duplicate the full text of your blog or github here. Just post the link for folks to click.

  5. Submission headline should match the article title (don’t cherry-pick information from the title to fit your agenda).

  6. No trolling.

Resources:

Any issues on the community? Report it using the report flag.

Questions? DM the mods!

founded 2 years ago
MODERATORS
 

I'm looking at different options for getting a NAS/RAID array system that is tolerant to not just hard drive failures but also to hardware/firmware and board failures. I've utilized a RAID array in the past that was built into the motherboard, which resulted in the motherboard failing and me having to ebay another one to get the RAID array back up and running. Then I bought a NAS 2 bay drive that was only compatible with drives up to 1.5TB. I've also used external drives for backup since I've been burned by hardware/firmware/software issues related to RAID arrays. Are there are any PCI RAID cards, NAS boxes or software RAID or other options where the hard drives would still be readable by other RAID cards if the boards failed? Maybe a software RAID solution? Any thoughts would be appreciated.

all 28 comments
[–] mholiv 6 points 1 year ago* (last edited 1 year ago) (3 children)

If you know Linux, I recommend going with some form of software RAID. A lot of people might recommend ZFS, but I would recommend btrfs on Linux. With btrfs you can add and remove drives of any size at will, unlike ZFS, and you don't need to worry about vdevs and the like. Simple, easy to use, and simple to upgrade. Just use btrfs, set data to raid1 and metadata to raid1c3, and you will have a rock solid system. You also won't have to worry about DKMS or kernel changes breaking your data storage. And before someone mentions it: there was a btrfs raid5 write hole, but that was addressed in kernel 6.2.
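
A rough sketch of creating such a pool (device names and mount point are placeholders; raid1c3 needs kernel 5.5 or newer):

```bash
# Data mirrored twice, metadata kept in three copies, across three example drives.
mkfs.btrfs -d raid1 -m raid1c3 /dev/sda /dev/sdb /dev/sdc

# Mount any one member device; the kernel pulls in the rest of the pool.
mount /dev/sda /mnt/pool
```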

Another interesting option to keep an eye on is bcachefs. It's headed for the mainline kernel and has amazing features.

Lastly you really want all of your drives to be connected via SATA or SAS or M.2. USB isn’t great for HDDs in any sort of raid.

[–] [email protected] 2 points 1 year ago (1 children)

Thanks, I've had Redhat/Fedora and Ubuntu/Mint systems, so this should not be an issue. What flavor of Linux are you running?

I'd like to set up RAID1 or 10 with SATA drives so btrfs sounds doable. Although Ars gave btrfs a pretty good drubbing here a few years ago: https://arstechnica.com/gadgets/2021/09/examining-btrfs-linuxs-perpetually-half-finished-filesystem/

[–] mholiv 4 points 1 year ago (2 children)

If you are using fedora (with a recent install) you are using btrfs right now. 😉

My desktop and laptop run fedora. For my servers I run Debian 12 with everything in docker.

As for that article: yeah, btrfs has had some rough points in its past. It's true, can't deny it. That said, I'd hold that the unconventional way btrfs treats RAID definitions is a design advantage.

Btrfs raid1 is more like "two replicas." If you have one 12TB drive and two 6TB drives, you get 12TB of usable space, because btrfs works behind the scenes to ensure every block has two copies on different devices. In a traditional RAID1 you could not use the space from the mismatched drives. It's not traditional raid1, but I think it's preferable.

I think the main advantage of btrfs for a home lab is that you can toss in any drive regardless of size and btrfs will use it, and you can remove any drive and btrfs will rebalance. You can also change the RAID profile on the fly: once I'm on kernel 6.2 I could convert my btrfs raid1 pool into a raid5 pool while the pool is live and mounted.
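
For illustration, the add/remove/convert workflow looks roughly like this (device names and mount point are placeholders):

```bash
# Add a drive of any size to a mounted pool, then rebalance data across it.
btrfs device add /dev/sdd /mnt/pool
btrfs balance start /mnt/pool

# Remove a drive; btrfs migrates its data to the remaining devices first.
btrfs device remove /dev/sdb /mnt/pool

# Convert the data profile on a live, mounted pool (here raid1 -> raid5).
btrfs balance start -dconvert=raid5 -mconvert=raid1c3 /mnt/pool
```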

For me home lab is being flexible and working with what you can get. And I feel btrfs is a great fit for that.

That and, since btrfs is a native part of the kernel, you won't ever have to worry about a kernel update breaking the ZFS shim or DKMS.

[–] peregus 1 points 1 year ago (1 children)

You seem well versed on the topic, so I'll jump in with a question: what do you think about SnapRAID? What are the advantages of btrfs over it? I've tried SnapRAID because it seems so malleable and easy, but (maybe due to my lack of knowledge) I don't feel safe with it, because I don't know what's going on: when it should do scrubs and syncs, and when it actually did them. How do you see btrfs drives for OMV in Proxmox?

[–] mholiv 1 points 1 year ago

Hey. I'm not an expert in SnapRAID, but all of these specialized systems use one of the following under the hood:

  1. Btrfs
  2. ZFS
  3. MDADM
  4. Some combination of 1, 2, and 3.
  5. Some proprietary system (like drobo)

As long as the system is widely used, sends you alerts when things go wrong, and uses 1-4, it should be fine. Just don't go proprietary, and make sure you actually get alerted when a drive fails.
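
For the alert side, a minimal sketch with smartd (the email address is a placeholder; assumes smartmontools is installed and the smartd service is enabled):

```
# /etc/smartd.conf: monitor all drives, run a short self-test daily at 02:00 and a
# long one on Saturdays at 03:00, and mail the placeholder address when a drive starts failing.
DEVICESCAN -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
```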

[–] [email protected] 1 points 1 year ago (1 children)

Good point. I'm leaning toward running the RAID as part of the OS rather than using a dedicated NAS OS like XigmaNAS or TrueNAS, since I'd like to still use the computer for things outside the NAS role those offer. I'm still looking into SnapRAID, which is more of a backup option than a RAID option. I have 4 HDDs right now and room for up to 6, and that's all I really need. With btrfs RAID, if my motherboard fails or I have to reinstall or change the OS, will any new system with a different motherboard and operating system that recognizes btrfs still be able to read the existing RAID array on the drives, without needing the previous hardware/firmware/OS info?

[–] mholiv 5 points 1 year ago

The only thing you need to do if you run a standard Linux distro is set up scheduled scrubbing and SMART alerts. NAS OSes do that by default, but if you set it up as a cron job or systemd timer you get the same result.
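
A minimal sketch of the scrub part as a cron job (the mount point is a placeholder):

```
# /etc/cron.d/btrfs-scrub: scrub the pool at 03:00 on the first of each month.
# -B keeps the scrub in the foreground so cron can report its exit status.
0 3 1 * * root /usr/bin/btrfs scrub start -B /mnt/pool
```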

The advantage of running a general Linux distro over a NAS OS is that you can add virtual machines on top via KVM or run appliances via Docker. It's just a server with a lot of storage added on top.

As for btrfs RAID: yes. If your motherboard fails or you have to reinstall the OS, you can reimport the array with no prior knowledge of it. You simply mount it like a normal Linux filesystem, because it is one; the kernel will locate all members of the RAID pool.
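
For example, on a fresh install (UUID and mount point are placeholders):

```bash
# Let the kernel scan for btrfs members (udev normally does this on its own).
btrfs device scan

# Find the pool's UUID, then mount any one member; the rest are picked up automatically.
blkid -t TYPE=btrfs
mount UUID=<your-pool-uuid> /mnt/pool
```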

[–] mea_rah 1 points 1 year ago (1 children)

Out of curiosity, in your experience are the issues mentioned in this article actually fixed now? They mention the write hole, and that was fixed. What about the rest?

[–] mholiv 2 points 1 year ago (1 children)

The crux of the matter is that the article's criticisms of btrfs are largely based on its differences from ZFS, rather than any inherent flaws in btrfs itself. Notably, SUSE Linux Enterprise, Fedora, and Meta's Linux engineers all advocate for btrfs, using it extensively in production.

The article's main grievances are:

Btrfs RAID Arrays:

The author is upset that btrfs RAID arrays don't function as he anticipated. However, btrfs isn't ZFS or mdadm; it's its own system and should be understood as such. The author criticizes btrfs for allowing drives of mismatched sizes. This flexibility, however, isn't inherently negative.

Btrfs RAID Array Management:

The author laments that btrfs arrays can't be mounted by a human-readable name the way ZFS pools can, and instead require a UUID. Using UUIDs is standard practice for native Linux filesystems. As a side note, mounting by raw device path is fragile because device names can change between boots; UUID is the recommended method.
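
For instance, a typical fstab entry looks like this (UUID, mount point, and options are placeholders):

```
# /etc/fstab: mount the btrfs pool by UUID rather than by device path.
UUID=<your-pool-uuid>  /mnt/pool  btrfs  defaults,compress=zstd  0  0
```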

Btrfs-RAID's Redundancy:

The author points out that btrfs won't auto-mount an array if a drive fails, while ZFS will. This is actually a protective measure: by refusing to auto-mount a degraded array, btrfs minimizes the risk of writing more data with reduced redundancy, prioritizing data preservation over uptime.
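
If a drive does die, you bring the array up deliberately instead (device and mount point are placeholders):

```bash
# Explicitly mount a pool with a missing member; btrfs refuses to do this automatically.
mount -o degraded /dev/sda /mnt/pool

# Check which device is missing and inspect per-device error counters.
btrfs filesystem show /mnt/pool
btrfs device stats /mnt/pool
```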

Btrfs-RAID Maintenance:

The author's complaint here boils down to "btrfs isn't ZFS." He attempts ZFS recovery methods on btrfs and is surprised when they don't work. The processes are different, but that doesn't mean btrfs is more labor-intensive.

He also critiques the use of crc32c for corruption detection. If this is a concern, other algorithms (xxhash, sha256, blake2) can be selected instead. The default, crc32c, is chosen for its speed, and some argue btrfs's integrity checks are faster than the alternatives.
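
The checksum algorithm is chosen when the filesystem is created, so as a sketch (devices are placeholders; assumes btrfs-progs and kernel 5.5 or newer):

```bash
# Use xxhash (or sha256 / blake2) instead of the default crc32c for data and metadata checksums.
mkfs.btrfs --csum xxhash -d raid1 -m raid1c3 /dev/sda /dev/sdb /dev/sdc
```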

In summary, the article's author seems primarily upset that btrfs isn't a ZFS clone. He also overlooks btrfs's advantages over ZFS; for example, out-of-tree ZFS pools can occasionally fail to mount after a kernel update, which can't happen with an in-tree filesystem. Meanwhile, major players like SUSE Linux Enterprise, Fedora, and Meta rely on btrfs in large-scale production environments.

When revisiting the article, keep the perspective of "an individual frustrated that btrfs isn't ZFS" in mind. The bias becomes evident.

[–] mea_rah 1 points 1 year ago (1 children)

The author is upset that btrfs RAID arrays don't function as he anticipated. However, btrfs isn't ZFS or mdadm; it's its own system and should be understood as such.

I'd say it's quite a reasonable critique, because RAID1 is an industry standard term. I can't think of any other RAID (HW or SW) that does RAID1 this way. If btrfs decided to call its implementation raid1 while it really isn't raid1 in some major way, that was a very bad idea. I don't agree it's a documentation issue; it's a really bad name choice. ZFS's raidz is also its own take on RAID, but because it has its own name it doesn't lead to confusion. A RAID1 system should never become less reliable as you add more drives.

The author points out that btrfs won't auto-mount an array if a drive fails, while ZFS will. This is actually a protective measure. By not auto-mounting, it minimizes the risk of further drive failures, prioritizing data preservation.

RAID is an uptime-preserving mechanism. If anyone uses RAID for data preservation purposes, they are setting themselves up for a nasty surprise. A RAID system that does not mount in a reduced-redundancy situation is very bad design: it sacrifices the usability of RAID to serve a purpose that RAID isn't really meant for.

He attempts ZFS recovery methods on btrfs and is surprised when they don't work.

I felt that way as well, but I think they raised one important point: there was no indication that the array was still in a reduced-redundancy state after their "attempt at recovery". ZFS is very clear about the state of the array at every step, and so is every other RAID system I've used, including some HW-based ones; every single one made it very clear that the array wasn't fully redundant.

In summary, the article's author seems primarily upset that btrfs isn't a ZFS clone.

FWIW I didn't have that impression. I have experience with multiple RAID controllers and multiple SW RAID systems and his points would be valid with any of those.

Anyways thank you for your reply. It's not the answer I was hoping for and I don't agree with your views on some of these issues. But it gives me pretty good idea of the current state of the filesystem.

[–] mholiv 1 points 1 year ago (1 children)

Hey, no problem. Something to keep an eye on for the future might be bcachefs; I think it's a step above ZFS and btrfs. The author missed the last merge window by days, but it should make it into the next one. It's exciting stuff. Other options might be a local GlusterFS or CephFS setup.

[–] mea_rah 1 points 1 year ago (1 children)

Oh wow, thanks. I read about bcachefs a long time ago and didn't realize it had gotten that far since. That's definitely something I'm very curious to try.

[–] mholiv 1 points 1 year ago (1 children)

Me too. I am really looking forward to the tiered storage system: NVMe backed by HDDs, backed by SMR HDDs. You write to the NVMe drives and in the background bcachefs slowly migrates the data to the slower media.
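
The bcachefs tooling already sketches out what that looks like; roughly something like this (device names and labels are placeholders, and the exact flags may still change before it lands):

```bash
# Format an NVMe drive as the fast (foreground) tier and an HDD as the slow (background) tier.
# Writes land on the fast group first and are migrated to the slow group in the background.
bcachefs format \
  --label=fast.nvme0 /dev/nvme0n1 \
  --label=slow.hdd0 /dev/sda \
  --foreground_target=fast \
  --background_target=slow \
  --promote_target=fast

# Multi-device filesystems are mounted by listing the members separated by colons.
mount -t bcachefs /dev/nvme0n1:/dev/sda /mnt/pool
```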

[–] mea_rah 1 points 1 year ago

Cries in TBW endurance, but also yes please!

[–] [email protected] -1 points 1 year ago (2 children)

I'm going to say avoid btrfs; it's still basically in beta. I want to see wide use in industry and feature parity with the competition, a la mdadm and ZFS with its vdevs.

[–] [email protected] 5 points 1 year ago

openSUSE and its commercial sibling have been defaulting to btrfs for almost a decade. The "btrfs is beta" meme is a dead horse. It's a great filesystem for what it was designed to do.

[–] mholiv 4 points 1 year ago (1 children)

Factually, you're wrong here. Btrfs has been around for more than 10 years and is used at scale: Meta uses it in their data centers, SUSE uses it as their default filesystem and builds its btrfs rollback/roll-forward into their enterprise offerings, and Fedora uses it as its default filesystem too.

If you prefer/know ZFS and want to avoid btrfs because of that I get it. But no need to say that btrfs is “in beta” 😂

[–] [email protected] 1 points 1 year ago (1 children)

Fair enough. Last time I checked, I saw enough people warning against btrfs that I figured it wasn't going to catch up to ZFS and kind of forgot about it. Now I realize that may have been a while ago. And if it's not in RHEL, I haven't considered it enterprise ready (which has recently been changing, with Red Hat / IBM losing their darn minds), but my "working knowledge" is limited on stuff I don't watch all the time.

[–] mholiv 1 points 1 year ago

Fair, fair. That being said, ZFS isn't in RHEL either. 🤔 Poor Red Hat, though. I used to work there a long time ago, and I'm sad to see how they went from being THE open source company to being worse than Oracle 🤢 when it comes to source distribution.

[–] [email protected] 4 points 1 year ago* (last edited 1 year ago) (2 children)

ZFS zRAID is pretty good for this I think. You hook up the drives from one "pool" to a new machine, and ZFS can detect them and see that they constitute a pool and import them.

I think it still stores some internal references to which drives are in the pool, but if you add the drives via the /dev/disk/by-id directory when making the pool, it ought to use stable IDs, at least across Linux machines.
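
A hedged sketch of that workflow (the pool name and by-id device names are placeholders):

```bash
# Create a raidz2 pool using stable /dev/disk/by-id names so it survives re-cabling and new hardware.
zpool create tank raidz2 \
  /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
  /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4

# On a replacement machine: list importable pools, then import by name.
zpool import
zpool import tank
```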

There's also always Git Annex for managing redundancy at the file level instead of inside the filesystem.
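
If that route appeals, a minimal sketch (the repo path and the "backupdrive" remote are placeholders; the remote would be another annex clone you've already added):

```bash
# Track large files with git-annex and require two copies before any drop is allowed.
git init ~/media && cd ~/media
git annex init "main library"
git annex numcopies 2

# Add a file, replicate it to the backup remote, and check where copies live.
git annex add bigfile.iso
git annex copy --to backupdrive bigfile.iso
git annex whereis bigfile.iso
```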

[–] PriorProject 3 points 1 year ago

ZFS zRAID is pretty good for this I think. You hook up the drives from one "pool" to a new machine, and ZFS can detect them and see that they constitute a pool and import them.

I second this approach, but if one isn't down with ZFS, LVM can bodge a RAID onto any filesystem at the block layer. I don't remember when I got over hardware RAID envy and decided that I preferred software RAID for my home lab, but it was a long while ago and I've never regretted it. Being able to plug some drives into any old USB, SATA, or whatever port on any Linux box is super valuable when things start going sideways and you don't have the budget for spare hardware or rapid-response support contracts.
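
For reference, a minimal sketch of the LVM route (disks, VG/LV names, and sizes are placeholders):

```bash
# Put two whole disks into a volume group, then carve out a mirrored (raid1) logical volume.
pvcreate /dev/sdb /dev/sdc
vgcreate vgdata /dev/sdb /dev/sdc
lvcreate --type raid1 -m 1 -L 500G -n lvmedia vgdata

# Put whatever filesystem you like on top; the redundancy lives below it.
mkfs.ext4 /dev/vgdata/lvmedia
```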

[–] [email protected] 1 points 1 year ago (1 children)

Thanks, reading up on ZFS now on Ars https://arstechnica.com/information-technology/2020/05/zfs-101-understanding-zfs-storage-and-performance/

Sounds like I could dedicate a server machine to running a raidz1, raidz2, or raidz3 pool with ZFS on Linux or TrueNAS? Or were you thinking of a somewhat different setup?

[–] [email protected] 1 points 1 year ago

That could work fine, probably? Or you could use it on the same machine as other stuff.

[–] ramielrowe 4 points 1 year ago

Keep in mind, RAID is fault tolerant, not fault proof. For critical data, follow the 3-2-1 rule: 3 copies of the data, on 2 different types of media, with 1 copy offsite.