Everything ZFS


A community for the ZFS filesystem.

ZFS is an open-source copy-on-write (COW) filesystem used by enterprises and serious homelabbers for its data safety and extensive feature set.

OpenZFS is the actively developed branch, now developed primarily for Linux with a port back to its FreeBSD roots.

This community is here to answer questions and discuss topics related to the use of ZFS in the wild.

Rules:

As always, the main rule is Don't Be a Dick. Be polite with new users asking questions that you may consider obvious. If you don't have something constructive to offer, downvote and move on.

No dirty deletes: your posts are here for posterity; perhaps the next person will get something out of them, even if they're wrong.

Looking for thoughts/opinions

I have a 5-disk raidz1 array. The disks are accumulating CKSUM errors, fairly evenly distributed across the drives. I've been lazy and let this progress to the point where there are permanent errors in files.

# zpool status -v
  pool: tank
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub repaired 748K in 06:17:19 with 1 errors on Sun Jul 14 06:41:22 2024
config:

        NAME                                 STATE     READ WRITE CKSUM
        tank                                 ONLINE       0     0     0
          raidz1-0                           ONLINE       0     0     0
            ata-ST8000VN004-2M2101_WSD13YBW  ONLINE       0     0     6
            ata-ST8000VN004-2M2101_WSD13YE4  ONLINE       0     0     7
            ata-ST8000VN004-2M2101_WSD1454G  ONLINE       0     0     8
            ata-ST8000VN004-2M2101_WSD1454W  ONLINE       0     0     6
            ata-ST8000VN004-2M2101_WSD14563  ONLINE       0     0     7

errors: Permanent errors have been detected in the following files:

        /you/do/not/need/this/level of detail.txt

I've done some research and believe (hope) that the cause of these errors is the consumer-grade onboard SATA controllers I'm using, and I have ordered an LSI SAS3008-based 9300-8i HBA as an upgrade.

I know I can fix the permanent error by deleting the affected file, restoring it from backup, and then running a scrub. But I'm torn: should I scrub now and risk stressing the array further on the crappy SATA controllers, or wait until I get the new HBA (in a few weeks; I went with the free, cheap, slow shipping)?
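For reference, once the new controller is in, the cleanup sequence is usually along these lines (a minimal sketch; the pool name comes from the status output above, and the file path is just a placeholder for whatever zpool status -v lists):

# restore the damaged file from a good copy (placeholder path)
rm /tank/path/to/damaged-file
cp /backup/path/to/damaged-file /tank/path/to/damaged-file
# reset the per-device error counters, then re-verify the whole pool
zpool clear tank
zpool scrub tank
zpool status -v tank    # CKSUM columns should now stay at 0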

submitted 1 year ago* (last edited 1 year ago) by JustineSmithies to c/zfs

One for those running #ZFS on #Linux systems. I realise that you can't have hibernate (aka suspend-to-disk) with swap on an encrypted zpool, but if you don't use hibernation, is it OK to use swap inside the encrypted pool, set up with something like the command example below?

I should point out that I'm thinking of switching from my current Void Linux LUKS-on-LVM setup to Void with a fully encrypted zpool and ZFSBootMenu on my ThinkPad P14s AMD Gen 1, which has 16 GB of RAM that I may upgrade to 40 GB. It also has a 1 TB NVMe drive.

zfs create -V "${v_swap_size}G" -b "$(getconf PAGESIZE)" -o compression=zle -o logbias=throughput -o sync=always -o primarycache=metadata -o secondarycache=none -o com.sun:auto-snapshot=false "$v_rpool_name/swap"
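For context, once a zvol like that exists it gets formatted and enabled as swap in the usual way (a hedged sketch, assuming v_rpool_name was set to rpool):

mkswap /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap
# and in /etc/fstab so it comes back after a reboot:
# /dev/zvol/rpool/swap  none  swap  defaults  0  0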


Good day everyone, I need some help with my zpool, which I created a long time ago. I have 8 drives in a raidz1, each a 3 TB Seagate 7200 RPM SAS drive. A couple of weeks ago one drive started throwing errors after going strong for almost 4 years, so I quickly replaced it with a spare I had just ordered. I wasn't totally sure what commands to run, so I looked around on a few forums and on the ZFS wiki and found that it should be a simple few commands:

sudo zpool offline TheMass 9173635512214770897

sudo zpool labelclear /dev/sdc

sudo zpool replace TheMass 9173635512214770897 /dev/sdc

For context, here is my lsblk output:

sda                 8:0    0  2.7T  0 disk
└─md124             9:124  0  2.7T  0 raid0
  ├─md124p1       259:11   0    2G  0 part
  └─md124p2       259:12   0  2.7T  0 part
sdb                 8:16   0  2.7T  0 disk
└─md121             9:121  0  2.7T  0 raid0
  ├─md121p1       259:17   0    2G  0 part
  │ └─md116         9:116  0    2G  0 raid1
  └─md121p2       259:18   0  2.7T  0 part
sdc                 8:32   0  2.7T  0 disk
├─sdc1              8:33   0  2.7T  0 part
└─sdc9              8:41   0    8M  0 part
sdd                 8:48   0  2.7T  0 disk
└─md125             9:125  0  2.7T  0 raid0
  ├─md125p1       259:9    0    2G  0 part
  └─md125p2       259:10   0  2.7T  0 part
sde                 8:64   0  2.7T  0 disk
└─md120             9:120  0  2.7T  0 raid0
  ├─md120p1       259:19   0    2G  0 part
  └─md120p2       259:20   0  2.7T  0 part
sdf                 8:80   0  2.7T  0 disk
└─md123             9:123  0  2.7T  0 raid0
  ├─md123p1       259:13   0    2G  0 part
  │ └─md117         9:117  0    2G  0 raid1
  └─md123p2       259:14   0  2.7T  0 part
sdg                 8:96   0  2.7T  0 disk
└─md122             9:122  0  2.7T  0 raid0
  ├─md122p1       259:15   0    2G  0 part
  │ └─md116         9:116  0    2G  0 raid1
  └─md122p2       259:16   0  2.7T  0 part
sdh                 8:112  0  2.7T  0 disk
└─md119             9:119  0  2.7T  0 raid0
  ├─md119p1       259:21   0    2G  0 part
  │ └─md117         9:117  0    2G  0 raid1
  └─md119p2       259:22   0  2.7T  0 part

I removed the old sdc drive, replaced it with the new one, and ran those commands. The pool began to resilver, and I thought everything was alright until I noticed that the new sdc drive doesn't have the same formatting as the other drives, and performance isn't what it used to be. The pool is up and running, per zpool status:

  pool: TheMass
 state: ONLINE
  scan: scrub repaired 0B in 03:47:04 with 0 errors on Fri Oct 6 20:24:26 2023
checkpoint: created Fri Oct 6 22:14:02 2023, consumes 1.41M
config:

NAME                        STATE     READ WRITE CKSUM
TheMass                     ONLINE       0     0     0
  raidz1-0                  ONLINE       0     0     0
    md124p2                 ONLINE       0     0     0
    scsi-35000c500562bfc4b  ONLINE       0     0     0
    md119p2                 ONLINE       0     0     0
    md121p2                 ONLINE       0     0     0
    md122p2                 ONLINE       0     0     0
    md123p2                 ONLINE       0     0     0
    md125p2                 ONLINE       0     0     0
    md120p2                 ONLINE       0     0     0

So my question is: did I do this correctly? If not, where did I go wrong, and how can I fix it? Also, if you could give me the commands I would need, that would be amazing!

If there are any other commands you need me to run for more information, just let me know!
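For anyone diagnosing this, a few read-only commands usually give the needed detail (a sketch; none of these modify the pool):

zpool status -v TheMass                      # full device tree and error counters
zpool list -v TheMass                        # per-vdev capacity and fragmentation
lsblk -o NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT    # how each disk is partitioned
sudo smartctl -a /dev/sdc                    # health of the replacement disk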


(I feel like I've violated plenty of common-sense stuff while learning ZFS. If so, you can let me have it in the comments.)

I have a large RAID-Z1 array that I back up to a USB 3.0 hard drive, and I'd like to maintain zfs-auto-snapshot's snapshots on the backup as well. I've been doing a zfs send MyArray | zfs recv Backup and it's been working pretty well; however, once in a while the array will become suspended due to an I/O error, and I'll either need to reboot or, rarely, destroy the backup disk and re-copy everything.

This seems like I'm doing something wrong, so I'd like to know: what is the best way to back up a ZFS array to a single portable disk?

I would prefer a portable backup solution like backing up to a single USB hard drive (preferably so I can buy two and keep one somewhere secure), but I'm open to getting a small NAS and zfs sending over a network.
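For what it's worth, the usual pattern here is incremental replication between named snapshots rather than re-sending everything (a minimal sketch, using the pool names MyArray and Backup from the post and made-up snapshot names):

# first run: full replication stream of the whole pool
zfs snapshot -r MyArray@backup-1
zfs send -R MyArray@backup-1 | zfs recv -F Backup
# later runs: send only the changes since the last common snapshot
zfs snapshot -r MyArray@backup-2
zfs send -R -I MyArray@backup-1 MyArray@backup-2 | zfs recv -F Backup

The -R flag replicates child datasets and their snapshots (including zfs-auto-snapshot's), and -I sends all intermediate snapshots between the two named ones.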


I'm trying to install Proxmox on a server that is going to be running Home Assistant, a security camera NVR setup, and other sensitive data. I need the drives to be encrypted, with automatic decryption so the VMs can resume automatically after a power failure.

My desired setup:

  • 2 SATA SSD boot drives in a ZFS mirror
  • 1 NVMe SSD for L2ARC and VM storage
  • 3 HDDs in a RAIDz1 for backups and general large storage
  • 1 HDD (maybe more added later) for the camera NVR VM

I'd prefer every drive encrypted with native ZFS encryption, decrypted automatically via TPM 2.0, or manually with a passphrase as a fallback if needed.

Guide I found:

I found a general guide on how to do something similar but it honestly went over my head (I'm still learning) and didn't include much information about additional drives: Proxmox with Secure Boot and Native ZFS Encryption

If someone could adapt that post into a more noob friendly guide for the latest Proxmox version, with directions for decryption of multiple drives, that would be amazing and I'm sure it would make an excellent addition to the Proxmox wiki ;)

My 2nd preferred setup:

  • 2 SATA SSD boot drives in a ZFS mirror with LUKS encryption and automatic decryption via clevis.
  • All other drives encrypted using ZFS native encryption, with the ZFS key (or keys) stored on the LUKS-encrypted boot drive partition.

With this arrangement, every drive would be encrypted at rest and decrypted on boot, with native ZFS encryption on most drives, but it has the downside of running ZFS on LUKS for the boot drives.
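To make that second arrangement concrete, a data pool encrypted with a raw key file kept on the LUKS-protected boot pool might look roughly like this (a hypothetical sketch; the pool/dataset names and key path are made up):

# generate and store a 32-byte raw key on the (LUKS-backed) root filesystem
dd if=/dev/urandom of=/etc/zfs/keys/tank.key bs=32 count=1
zfs create -o encryption=aes-256-gcm \
           -o keyformat=raw \
           -o keylocation=file:///etc/zfs/keys/tank.key \
           tank/secure
# at boot, after the LUKS volume is open:
zfs load-key tank/secure && zfs mount tank/secure
# a dataset has one active wrapping key at a time, but it can be switched
# to a passphrase later with: zfs change-key -o keyformat=passphrase tank/secure

Note the wrapping-key point: a dataset does not have a key file and a passphrase active at the same time, so losing the key file without a copy elsewhere would leave that data unrecoverable.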

Is storing the ZFS keys in a LUKS partition insecure in some way? Would this result in undecryptable drives if something happened to the ZFS keys on the boot drive, or can the drives also be decrypted with a passphrase as a backup?

As it stands right now, I'm really stuck trying to figure this out so any help or well written guides are heavily appreciated. Thanks for reading!


There are also other subforums for Proxmox and TrueNAS there.


There is also a Proxmox subforum since there is quite a lot of overlap between the two.

submitted 2 years ago* (last edited 2 years ago) by ikidd to c/zfs

I wanted to share with y’all a new file system project I’ve been working on: speZFS

speZFS is based on the principle that your data doesn’t actually belong to you—and you should learn to like it that way. At every possible turn, speZFS goes out of its way to show contempt for the user, including such features as:

Data reliability: With speZFS, “integrity” is a dirty word, so we use the term “reliability” instead. What that actually means is that your data is likely to be silently edited on disk, with no notice given to the user that this has occurred. Should this reliability feature be noticed by the user, speZFS responds by raising EXCEPTION_OOPSSORRYLOL and continuing as if nothing ever happened.

Advanced file permissions: No longer are files exclusively available to the “landed gentry” just because they happened to create them. Any user who refuses to allow global access to their files will find their access revoked, and new file owners instated.

Introspection protection: This cutting-edge feature actively prevents users from finding out what the hell is actually going on, by providing misleading information to debugging tools, filesystem analyzers, decompilers, and so on. In essence, any attempt to “ask” speZFS what it’s doing or why will be met with useless stock answers, insults, and/or outright threats.

Dedicated suite of file access tools: speZFS comes with a set of tools designed specifically for use with it. These include spezfs-ls (injects advertisements into the file listing), spezfs-rm (only allows you to remove a single file at a time, which is subject to being arbitrarily restored later), spezfs-cp (claims ownership of your copied files, and sells access to them for use in AI training models), and spezfs-find (does nothing). Want to use your own tools? No problem! Access to so-called "third-party" filesystem tools will be allowed free for one month after installation, and thereafter at the bargain price of $0.24 per 10,000 file accesses.

My hope is that you find speZFS to be a useful, well-designed, and overall great filesystem… and if not, who the hell cares what you think anyway?

Credit: https://www.reddit.com/r/zfs/comments/14gh8ud/announcing_a_new_file_system_spezfs/