this post was submitted on 02 Dec 2024
26 points (96.4% liked)

Selfhosted


Hello there Selfhosted community!

This is an announcement that a project I've been working on is complete: a script for installing Ubuntu 24.04 on a ZFS RAID 10. I'd like to describe why I chose to develop this and how I'd like other people to have access to it as well. Let's start with the hardware.

Now, I am using an old host. Mine in particular was originally a BCDR device built on a ZFS raidz implementation. Since it was designed for ZFS, it doesn't even have a RAID card, only an HBA, so for redundancy ZFS is a natural way to go. Even though this was a backup appliance, it did not have root on ZFS. Instead, it had a separate hard drive for the operating system and three individual disks for the zpool. That was not my goal.

So I did a little research and testing. I looked at two particular guides (Debian/Ubuntu). I performed those steps dozens of times because I kept messing up the little things, so to eliminate the human error (that's me) I decided to just go ahead and script the whole thing.

The GitHub repository I linked contains all the code needed to set up a generic Ubuntu Server host using a ZFS RAID 10.
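For readers unfamiliar with the topology: a four-disk ZFS "RAID 10" is a single pool striped across two mirror vdevs. The repository's exact commands may differ; this is an illustrative sketch with placeholder device names, prefixed with echo as a dry run so it is safe to execute anywhere. Drop the echo on a real host.

```shell
# Illustrative only: stripe over two mirrors = ZFS "RAID 10".
# ata-DISK1..4 are placeholder by-id names, not real devices.
echo zpool create rpool \
    mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2 \
    mirror /dev/disk/by-id/ata-DISK3 /dev/disk/by-id/ata-DISK4
```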

Instructions for starting the script are easy. Boot up a live CD (https://ubuntu.com/download/server), hit CTRL+ALT+F2 to drop into the shell, and run the following command:

bash <(wget -qO- https://raw.githubusercontent.com/Reddimes/ubuntu-zfsraid10/refs/heads/main/tools/install.sh)

This command clones the repository, changes into its directory, and runs the entry point (sudo ./init.sh). Hopefully this should be easy to customize to meet your needs.
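The bash <(wget -qO- URL) construct is process substitution: bash executes the fetched script from a stream without it ever touching disk. A minimal local demonstration of the same pattern, with a temp file standing in for the network fetch:

```shell
# Stand-in for the remote script: write a one-liner to a temp file.
script=$(mktemp)
echo 'echo hello from streamed script' > "$script"

# Same pattern as: bash <(wget -qO- URL), with cat replacing the fetch.
bash <(cat "$script")

rm -f "$script"
```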

More engineering details are in the GitHub repository.

top 9 comments
[–] [email protected] 4 points 2 days ago (1 children)

Why RAID 10 and not RAID-z2?

[–] one_knight_scripting 2 points 2 days ago* (last edited 2 days ago) (1 children)

I had disk performance in mind. A ZFS RAID 10 beats a raidz2 in terms of read and write speeds, according to the Internet at least. My instance will become a KVM host holding five Kubernetes VMs, so it kinda needs a bit of a boost. Who knows if it will work; I'm told my buddy had issues with HDDs and had to go to SSDs.
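For context, the rule-of-thumb scaling behind this comparison can be sketched numerically. These are idealized streaming multipliers only, not measurements; real performance depends heavily on workload, record size, and caching.

```python
# Idealized streaming-throughput multipliers for an n-disk pool.

# Striped mirrors (ZFS "RAID 10"): reads can be serviced by every disk,
# but each block is written twice, so write throughput is roughly halved.
def raid10_multipliers(n):
    return {"read": n, "write": n // 2}

# raidz2: two disks' worth of every stripe goes to parity, so both reads
# and writes scale with roughly the n - 2 data disks.
def raidz2_multipliers(n):
    return {"read": n - 2, "write": n - 2}

for n in (4, 6):
    print(n, raid10_multipliers(n), raidz2_multipliers(n))
```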

[–] [email protected] 2 points 2 days ago* (last edited 2 days ago) (1 children)

Not quite: a RAID 10 offers you faster writes but slower reads, so it's not universally faster.

[–] [email protected] 3 points 2 days ago (1 children)

Compared to Z2? Not according to the link they just provided.

[–] [email protected] 3 points 2 days ago* (last edited 2 days ago) (2 children)

You're right. I must've still been half asleep or something, because I swear when I read it earlier the read speeds were flipped (I read the RAID 10 read speed as belonging to raidz2 and vice versa)... my bad.

[–] one_knight_scripting 1 points 2 days ago* (last edited 2 days ago) (1 children)

Well... I have to admit my own mistake as well. I assumed it would have faster read and write speeds based on my RAID knowledge and didn't actually look it up until I was questioned about it. So I appreciate being kept honest.

While we've agreed on the read/write benefits of a ZFS RAID 10, there are a few disadvantages to a setup like this. For one, I don't have the same level of redundancy. A raidz2 can lose any two hard drives. A ZFS RAID 10 can lose one drive guaranteed, and up to two total: as long as an entire mirror isn't gone, I can survive two failures. So overall, this setup is less redundant than raidz2.
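That redundancy difference is easy to check by brute force. A quick sketch, assuming a four-disk pool of two two-way mirrors: raidz2 survives all two-disk failures, while the striped mirror only survives the ones that land in different mirrors.

```python
from itertools import combinations

# Four-disk ZFS "RAID 10": two mirror vdevs over disks (0,1) and (2,3).
mirrors = [(0, 1), (2, 3)]

def pool_survives(failed):
    # The pool is lost only if some mirror loses every one of its disks.
    return all(any(disk not in failed for disk in mirror) for mirror in mirrors)

two_disk_failures = list(combinations(range(4), 2))
survivable = [f for f in two_disk_failures if pool_survives(set(f))]

# The 2 fatal cases are exactly the ones where a whole mirror dies;
# a raidz2 of the same 4 disks would survive all 6.
print(f"{len(survivable)} of {len(two_disk_failures)} two-disk failures survivable")
```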

Another drawback is that, for some reason, Ubuntu 24.04 does not recognize SCSI drives except on the live CD. Perhaps someone can help me with this to provide everyone with a better solution. The same disks that were visible on the live CD are not visible once the system is installed. It still technically works, but zpool status rpool will show that it is using sdb3 instead of the SCSI HDDs. This is technically fine; my HDDs are SATA anyway, so I just switched to the SATA HDDs. But if I could ensure that others don't face this issue, it would give them a more reliable ZFS installation.

[–] one_knight_scripting 1 points 1 day ago* (last edited 1 day ago)

Here is the exact issue I'm having. I've included screenshots of the command I use to list HDDs on the live CD versus the same command run on the installed Ubuntu 24.04. I don't know what is causing this, so perhaps this is a time when someone else can assist. The benefit of using /dev/disk/by-id/ is that you can be more specific about the device, so you can be sure it is connected to the proper disk no matter the state your environment is in. That is something you need for a stable ZFS install, but if I can't do it with SCSI disks, then that advantage is limited.

Windows Terminal for the win, btw.

Live CD:

Ubuntu 24.04 Installed:
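For what it's worth, the entries under /dev/disk/by-id/ are just symlinks to the kernel device nodes, which is why they stay stable across reboots while /dev/sdX names can shuffle. A minimal demonstration of that resolution, using a temporary stand-in symlink rather than a real disk:

```shell
# Create a stand-in "device node" and a by-id-style symlink to it.
tmpdir=$(mktemp -d)
touch "$tmpdir/sda"                       # stands in for /dev/sda
ln -s "$tmpdir/sda" "$tmpdir/ata-DISK1"   # stands in for a by-id entry

# readlink -f resolves the stable name to the current kernel node,
# just as ZFS resolves by-id paths at import time.
readlink -f "$tmpdir/ata-DISK1"

rm -rf "$tmpdir"
```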

[–] [email protected] 1 points 2 days ago
[–] one_knight_scripting 3 points 2 days ago

oOoo... 10 upvotes on a zfs raid 10. I feel like I got the perfect amount.