this post was submitted on 30 Nov 2024
14 points (88.9% liked)

Selfhosted


I currently have a two-disk ZFS mirror which has been filling up recently, so I decided to buy another drive. But when I started thinking about it, I was unsure how to actually make it usable. The issue is that I have ~11 TB on the existing pool (two 12 TB drives, a and b) and no spare drives of that size to copy all my data to while creating the new 3-drive pool from the same drives plus the additional new drive (c).

I saw there is a way to create a "broken" (degraded) pool with one member missing, keep the data on the remaining drive, then copy the data over to the new pool and "repair" it afterwards with the freed-up drive.

As I only have 11 Tb of data which would theoretically fit one disk, would I be able to:

  • keep the old pool
  • initialize the new pool with just one drive and copy over the data
  • detach one drive from the old pool, add it to the new pool (if possible — would there already be parity data generated on this drive at that point? Would the parity be generated in a way that would allow me to lose the other drive in the pool and recover the data from the remaining drive alone?)
  • destroy old pool, add last drive to new pool
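For reference, the placeholder-device trick these steps describe might look roughly like the sketch below. All pool and device names are hypothetical, the commands are untested, and the degraded raidz1 has zero redundancy until the final resilver completes — a single drive failure mid-migration loses everything:

```shell
# All device names are hypothetical -- substitute your /dev/disk/by-id paths.

# 1. Break the old mirror: detach drive b; the data stays intact on drive a.
zpool detach oldpool ata-drive-b

# 2. Create a sparse file the size of a real disk as a stand-in member.
truncate -s 12T /tmp/placeholder.img

# 3. Build the new 3-drive raidz1 from b, the new drive c, and the placeholder.
zpool create newpool raidz1 ata-drive-b ata-drive-c /tmp/placeholder.img

# 4. Take the placeholder offline immediately so ZFS never writes to it.
zpool offline newpool /tmp/placeholder.img
rm /tmp/placeholder.img

# 5. Copy the data over with a recursive snapshot plus send/receive.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -F newpool

# 6. Destroy the old pool and resilver drive a into the placeholder's slot.
zpool destroy oldpool
zpool replace newpool /tmp/placeholder.img ata-drive-a
```

The `zpool replace` in the last step is what generates parity onto the third drive: redundancy only exists once that resilver finishes, not before.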

I would be able to back up my important data, but I don't have enough space to also back up my media library, which I'd like to not have to rebuild.

alternatively: anyone in Berlin wanna lend me a 12 TB drive?

[–] one_knight_scripting 2 points 4 days ago* (last edited 3 days ago) (1 children)

Interesting... Though I know nothing about your particular setup, or migrating existing data, I have a similar project in the works: automatically setting up a ZFS RAID 10 on Ubuntu 24.04.

If you are interested in seeing how I am doing it, I used the OpenZFS root-on-ZFS guides for Debian/Ubuntu.

Debian

Ubuntu

For the code, take a look at this GitHub repo: https://github.com/Reddimes/ubuntu-zfsraid10/

One thing to note is that this runs two zpools, one for / and one for /boot. It is also specifically UEFI; if you need legacy boot, you need to change the partitioning a little bit (see init.sh).

BE WARNED THAT THIS WIPES ALL FILESYSTEMS AND DELETES ALL PARTITIONS

To run it, boot an ubuntu-server live CD and run the following:

git clone --depth 1 https://github.com/Reddimes/ubuntu-zfsraid10.git
cd ubuntu-zfsraid10
chmod +x *.sh
vim init.sh    # Change all disks to be relevant to your setup.
vim chroot.sh    # Same thing here.
sudo ./init.sh

On first login, there are a few things I have not scripted yet:

apt update && apt full-upgrade
dpkg-reconfigure grub-efi-amd64

There are two options for automating this: either I create a runonce.d service, or I add a script to the user's profile.d directory which then deletes itself. I also need to include a proper netplan configuration. I'm simply not there yet.

I imagine in your case you could start a new pool and use zfs send to copy over the data from the old pool, then remove the old pool entirely and add the old disks to the new pool. I certainly have never done this myself, though, and I suspect there may be an issue. The other option you have (if you have room for one more drive) is to configure it as a ZFS RAID 10: then you don't need to migrate the data at all, but just add an additional mirror vdev made from the new drives.
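The mirror-vdev route mentioned above is essentially a one-liner — pool and device names here are hypothetical, and note that `zpool add` is effectively permanent on most setups, and existing data is not rebalanced onto the new vdev; new writes are simply striped across both mirrors:

```shell
# Grow an existing striped-mirror (RAID 10) pool by one mirror vdev.
# Pool and device names are hypothetical -- adjust to your setup.
zpool add tank mirror ata-drive-c ata-drive-d

# Confirm the new mirror shows up alongside the old one.
zpool status tank
```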

One thing I tried to do was to make the scripts easily customizable. It still is not yet ready for that, though. You could simply change the zpool commands in the init.sh.

[–] [email protected] 2 points 3 days ago (1 children)

Sounds interesting, but while I have room for one more drive, I don't want to spend money on one more drive xD (As mentioned, I have 12 TB drives, so another one I don't really need would hurt the wallet quite a bit.)

[–] one_knight_scripting 2 points 3 days ago

That is totally fair. Actually I just upgraded to 12 TB drives and that's why I'm working on this. So huge props to your design choice. Also props for using zfs, I feel like it flies under the radar a lot.