one_knight_scripting

joined 10 months ago
[–] one_knight_scripting 1 points 3 weeks ago

I seed what you did there.

[–] one_knight_scripting 2 points 4 weeks ago

I'm gonna disagree with you. They're just trying to share another community focused on that art. Frankly, I'm always looking for more Trekkies. ¯\_(ツ)_/¯

[–] one_knight_scripting 3 points 1 month ago

I'm going to go with yes. Unlike Star Wars, Stargate has multiple enemies and is closer to Star Trek with its episodic nature. Although some (ok, most) of the enemies are based on ancient fantasy characters, what actually makes them different is simply technology, as opposed to any legitimate magic powers (the Force). I think that easily takes it from fantasy into science fiction. Perhaps this argument loses support when it comes to the Ori. But what about the Wraith, the Goa'uld, the Ancients, the Jaffa, the Tok'ra, the Nox, the Replicators, and the Asgard?

[–] one_knight_scripting 2 points 1 month ago

Thanks for the insight!

I may have to resort to BTRFS for this host eventually if ZFS fails me. I don't expect a lot of duplication on a host, and even if there is some, who cares? I have 60 TB even with the RAID 10 architecture. Having something with in-kernel support may be the better approach anyway.

It's interesting to me that it struggles with RAID 5 and 6, though. I would have expected those to be easy to provide.

[–] one_knight_scripting 2 points 1 month ago* (last edited 1 month ago) (2 children)

BTRFS is a damn good option too, and I'm happy to hear how easy it is to use. I haven't used it (yet); I went with ZFS because of its flexible architecture. On a desktop, BTRFS makes sense, but on a server? What is it like on a hypervisor?

I'm working on standing up a CloudStack host as a hypervisor. I want this host to run 5 Kubernetes VMs, so it needs quick access to the disks. I don't have a RAID card, only an HBA. In such a scenario I would typically use a RAID 10, and a ZFS RAID 10 outperforms an mdraid 10 anyway (in terms of writing, not necessarily reading). So that is what I've decided. It may not be a good idea, it may not even be feasible, but I'm heckin' willing to give it a shot.
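
For anyone unfamiliar, a ZFS RAID 10 is just a stripe across mirrored vdev pairs. A minimal sketch of what I mean (pool name and disk IDs are placeholders, not my actual hardware):

# Create a striped-mirror ("RAID 10") pool from four disks.
# Each mirror pair is a vdev; ZFS stripes writes across the vdevs.
zpool create tank \
  mirror /dev/disk/by-id/ata-DISK0 /dev/disk/by-id/ata-DISK1 \
  mirror /dev/disk/by-id/ata-DISK2 /dev/disk/by-id/ata-DISK3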

I'm actually jealous that you get built-in kernel support automatically, though. I am a little curious how (or if) BTRFS connects multiple disks; I'm simply uninformed.

ZFS Performance Sauce

Install Ubuntu 24.04 on ZFS RAID 10 - Github Repository

Edit: There are a few drawbacks to using ZFS, lousy Docker performance being one that I've heard about. I'm curious how this will be affected if I have Docker running inside a VM.
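
For anyone who wants to experiment, Docker does ship a zfs storage driver. A rough sketch, assuming a dataset dedicated to Docker (names are hypothetical, and I haven't benchmarked whether it actually helps):

# Give Docker its own dataset, then switch it to the zfs storage driver.
zfs create -o mountpoint=/var/lib/docker rpool/docker
echo '{ "storage-driver": "zfs" }' | sudo tee /etc/docker/daemon.json
sudo systemctl restart docker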

[–] one_knight_scripting 2 points 1 month ago* (last edited 1 month ago) (1 children)

That's fair. I chose ZFS because I've used it before and understand it fairly well already. I know nothing about BTRFS, so perhaps you could educate me a little. I'm working on setting up a CloudStack host using ZFS RAID 10. Does BTRFS have a flexible enough architecture to do something similar?

Edit: Perhaps you could also inform me about BTRFS speeds. From what I understand, ZFS outperforms BTRFS on large datasets, but I don't know where the cutoff is. For reference, it would need to run 12 × 10 TB HDDs.

[–] one_knight_scripting 4 points 1 month ago* (last edited 1 month ago) (5 children)

Try Root on ZFS.

If you suddenly run into an issue, you can restore from a snapshot.
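
The workflow is roughly this (dataset and snapshot names are placeholders for whatever your install uses):

# Snapshot the root dataset before a risky change...
zfs snapshot rpool/ROOT/ubuntu@pre-upgrade
# ...and if things go sideways, roll the root back to it.
zfs rollback rpool/ROOT/ubuntu@pre-upgrade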

[–] one_knight_scripting 4 points 1 month ago (3 children)

Uh... I don't have a separate partition for /home, but I do have a separate ZFS filesystem for it. If I run into issues, I can restore from a snapshot without affecting it.
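
On ZFS the separation is a child dataset rather than a partition; something like this sketch (dataset names are hypothetical):

# /home lives in its own dataset...
zfs create -o mountpoint=/home rpool/home
# ...so rolling back the root dataset leaves /home untouched.
zfs rollback rpool/ROOT/ubuntu@known-good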

[–] one_knight_scripting 1 points 1 month ago* (last edited 1 month ago)

To go with this, you can also look at https://www.goharddrive.com/. They provide white-label and refurbished drives; mine came with a three-year warranty, at ~$110 USD for a 12 TB Seagate IronWolf.

[–] one_knight_scripting 1 points 1 month ago

Hah. I'm glad I'm not the only one who thought of ZFS. I do have a project involving it though.

[–] one_knight_scripting 1 points 1 month ago* (last edited 1 month ago)

Here is the exact issue I'm having. I've included screenshots of the command I use to list HDDs on the live CD versus the same command run on Ubuntu 24.04. I don't know what is causing this issue, so perhaps this is a time where someone else can assist. The benefit of using /dev/disk/by-id/ is that you can be more specific about the device, so you can be sure you're addressing the proper disk no matter what state your environment is in. That is something you need for a stable ZFS install. But if I can't do that with SCSI disks, then that advantage is limited.

Windows Terminal for the win, btw.

Live CD:

Ubuntu 24.04 Installed:
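
For anyone who can't see the screenshots, the command in question is roughly the by-id listing:

# Stable identifiers: these links survive reboots and controller
# reordering, unlike bare /dev/sd* names.
ls -l /dev/disk/by-id/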

[–] one_knight_scripting 1 points 1 month ago* (last edited 1 month ago) (1 children)

Well... I have to admit my own mistake as well. I assumed it would have faster read and write speeds based on my RAID knowledge and didn't actually look it up until I was questioned about it. So I appreciate being kept honest.

While we've agreed on the read/write benefits of a ZFS RAID 10, there are a few disadvantages to a setup like this. For one, I don't have the same level of redundancy. A raidz2 can lose any two drives. A ZFS RAID 10 is only guaranteed to survive one failure; a second failure is survivable only as long as no single mirror loses both of its disks. So overall, this setup is less redundant than raidz2.

Another drawback is that for some reason, Ubuntu 24.04 does not expose the SCSI drives under /dev/disk/by-id/ except on the live CD. Perhaps someone can help me with this to give everyone a better solution. The same disks that were visible on the live CD are not visible once the system is installed. It still technically works, but zpool status rpool will show that it is using sdb3 instead of the stable SCSI IDs. That is technically fine; my HDDs are SATA anyway, so I just switched to the SATA IDs. But if I could ensure that others don't face this issue, it would make their ZFS installations more reliable.
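
One workaround I've seen suggested (untested here, and for a root pool you'd have to do it from the live CD, since rpool can't be exported while it's in use) is re-importing the pool with the by-id directory so the vdevs get recorded under their stable names:

# Export the pool, then import it while searching /dev/disk/by-id,
# so zpool status shows stable IDs instead of sdb3 and friends.
zpool export rpool
zpool import -d /dev/disk/by-id rpool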

26
submitted 1 month ago* (last edited 1 month ago) by one_knight_scripting to c/selfhosted
 

Hello there Selfhosted community!

This is an announcement of the completion of a project I've been working on: a script for installing Ubuntu 24.04 on a ZFS RAID 10. I'd like to describe why I chose to develop this and how other people can get access to it as well. Let's start with the hardware.

Now, I am using an old host. Mine in particular was originally a BCDR device based on a ZFS raidz implementation. Since it was designed for ZFS, it doesn't even have a RAID card, only an HBA. So for redundancy, ZFS is a good way to go. But even though this was a backup appliance, it did not have root on ZFS. Instead, it had a separate hard drive for the operating system and three individual disks for the zpool. That was not my goal.

So I did a little research and testing. I looked at two particular guides (Debian/Ubuntu). I performed those steps dozens of times because I kept messing up the little things, and to eliminate the human error (that's me) I decided to just go ahead and script the whole thing.

The GitHub repository I linked contains all the code needed to set up a generic ubuntu-server host using a ZFS RAID 10.

Instructions for starting the script are easy. Boot up a live CD (https://ubuntu.com/download/server), hit CTRL+ALT+F2 to drop into the shell, and run the following command:

bash <(wget -qO- https://raw.githubusercontent.com/Reddimes/ubuntu-zfsraid10/refs/heads/main/tools/install.sh)

This command clones the repository, changes directory into it, and runs the entry point (sudo ./init.sh). Hopefully, this should be easy to customize to meet your needs.
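
In other words, the one-liner is roughly equivalent to the following (assuming the standard repository URL implied by the raw link above; the exact steps live in tools/install.sh):

git clone https://github.com/Reddimes/ubuntu-zfsraid10.git
cd ubuntu-zfsraid10
sudo ./init.sh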

More engineering details are on the GitHub.
