
I recently moved my files to a new ZFS pool and took that chance to properly configure my datasets.

This led me to discover ZFS deduplication.

As most of my storage is used by my Jellyfin library (~7-8 TB), which is mostly uncompressed Blu-ray rips, I thought I might be able to save some space by using deduplication in addition to compression.

Has anyone here used that for similar files before? What was your experience with it?

I am not too worried about performance. The dataset in question rarely changes, basically only when I add more media every couple of months. I also overshot my CPU target when originally configuring my server, so there is a lot of headroom there. I have 32 GB of RAM, which is not really fully utilized either (and I would not mind upgrading to 64 GB too much).

My main concern is that I am unsure whether it is actually useful. I suspect that, given the amount of data and the similarity in type, there would statistically be a fair amount of block-level duplication, but I could not find any real-world data or experiences on that.
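
Edit: for anyone else wondering the same thing, ZFS can apparently simulate deduplication on existing data without enabling it, using zdb (the pool name below is just a placeholder):

  zdb -S tank

It reads through the pool, builds a simulated dedup table, and prints a histogram plus an estimated dedup ratio at the end, so you can see whether it would pay off before committing. Be warned that it has to read everything, so it takes a while on a multi-TB pool.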

[–] [email protected] 2 points 11 hours ago

I worked with dedupe products at a previous job. Media files generally deduplicate poorly.

[–] [email protected] 11 points 1 day ago (1 children)

Uncompressed Blu-ray rips end up almost the same size even after lossless compression. The binary content of h264 files is almost random bits, so deduplication is almost a waste of CPU time. Maybe you can save space from the useless repeated media like trailers and other ads in the Blu-ray ISOs.
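
If you want to see how little compression buys you on a dataset like that, ZFS already reports it (the dataset name here is just an example):

  zfs get compression,compressratio tank/media

For video, a compressratio barely above 1.00x is the expected result; the cheap wins are usually just metadata, subtitles, and the odd extras folder.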

[–] [email protected] 1 points 6 hours ago

Thank you, I didn't know that, but that was the type of answer I was looking for :D.

I only ripped the relevant media anyways, so no trailers to remove.

[–] [email protected] 24 points 2 days ago (2 children)

You should maybe read about the use cases for deduplication before using it. Here's one recent article:

https://despairlabs.com/blog/posts/2024-10-27-openzfs-dedup-is-good-dont-use-it/

If you mostly store legit Blu-ray rips, the answer is probably no, you should not use zfs deduplication.

[–] [email protected] 3 points 1 day ago (1 children)

I'm in almost the exact same situation as OP, 8 TB of raw Blu-ray dumps, except I'm on XFS. I ran duperemove and freed ~200 GB.
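
For anyone wanting to try the same, the invocation would look roughly like this (paths are placeholders; the -d step only works on filesystems with reflink support, such as XFS or btrfs):

  duperemove -r -d -h --hashfile=/var/tmp/media.hash /mnt/media

The hashfile is optional but saves rescanning everything on later runs.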

[–] [email protected] 2 points 6 hours ago (1 children)

I think I was a bit unclear on that: I meant uncompressed rips as in I ripped the relevant media to uncompressed MKVs, I didn't save the entire disc dump. I mostly have such rips, but also a bit of media from other sources™ which is already compressed. So I suspect my results would be even worse.

[–] [email protected] 1 points 5 hours ago

I agree. Most of my duplicates came from the raw disc files. I too dump some content to MKV (mainly TV episodes) but those files likely have much less duplication, though I do recall some of the duplicates coming from The Office in MKV.

(I do wonder if those The Office duplicates were something like the opening title, or scenes from the episode showing clips from previous episodes because it seems highly unlikely that the raw video streams were similar.)

[–] friend_of_satan 4 points 2 days ago* (last edited 2 days ago)

I was also going to link this. I started using zfs 10-ish years ago and used dedup when it came out, and it was really not worth it except for archiving a bunch of stuff I knew had gigs of duplicate data. Performance was so poor.

[–] [email protected] 8 points 2 days ago

I think the universal consensus is that, outside of one very specific use case (multiple VDI desktops that share the same image), ZFS dedupe is completely useless at best and will destroy your dataset at worst by causing it to be unmountable on any system that has less RAM than needed. In every other use case, the savings are not worth the trouble.

Even in the VDI use case, unless you have MANY copies of said disk images (like 5+ copies of each), it's still not worth the increase in system resources needed to use ZFS dedupe.

It’s one of those “oooh shiny” nice features that everyone wants to use, but will regret it nearly every time.

[–] greyfox 4 points 2 days ago

Like most have said, it is best to stay away from ZFS deduplication. Especially if your data set is media, the chances of an entire ZFS block being identical to any other are small unless you somehow have multiple copies of the same content.

Imagine two MP3s with the exact same music content but slightly different artist metadata. If one is even a single byte longer or shorter at the beginning of the file, then even though the file spans multiple blocks, ZFS won't be able to deduplicate a single one of them: that tiny offset shifts the rest of the file just enough to throw off the block checksums across every block in the file.

To contrast with ZFS, enterprise backup/NAS appliances with deduplication usually do a lot more than block-level checks. They typically scan with sliding window sizes/offsets to find more duplicate data.

There are still some use cases where ZFS dedup can help, like doing multiple full backups of VMs. A VM image has a fixed size, so the offset issue above doesn't apply. But beware that enabling deduplication for even a single ZFS filesystem affects the entire pool, including ZFS filesystems that have deduplication disabled: the deduplication table is global to the pool, and once you have turned it on you really can't get rid of it. If you get into a situation where you don't have enough memory to keep the deduplication table in RAM, ZFS will grind to a halt, and the only way to completely remove deduplication is to copy all of your data to a new ZFS pool.
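
If you do experiment with it anyway, note that the switch is set per dataset while the table is tracked per pool, and you can watch the table grow with zpool status (the names below are placeholders):

  zfs set dedup=on tank/vm-backups
  zpool status -D tank

The -D output shows the dedup table (DDT) entry counts and sizes, which is a decent proxy for how much RAM it will want.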

If you think this feature would still be useful for you, you might want to wait for the 2.3 release (which isn't too far off) and its new fast dedup feature, which fixes or at least mitigates a lot of the major issues with ZFS dedup.

More info on the fast dedup feature here: https://github.com/openzfs/zfs/discussions/15896

[–] nottelling 5 points 2 days ago (1 children)

ZFS dedup is memory constrained, and the memory use scales with the number of block hashes it has to track.

If performance isn't a concern, you're better off compressing your media. You'll get similar storage efficiency with less crash consistency risk.

[–] [email protected] 4 points 2 days ago (1 children)

ZFS in general is pretty memory hungry. I set up my Proxmox server with ZFS pools a while ago and now I kind of regret it. ZFS itself is very nice and has a ton of useful features, but I just don't have the hardware nor the usage pattern to benefit from it that much on my server. I'd rather have that thing running on LVM and/or software RAID to have more usable memory for my VMs. That's one of the projects I've been planning for the server: replace the ZFS pools with something that suits my usage patterns better. But that's a whole other story and requires some spare money and spare time, neither of which I really have at hand right now.

[–] [email protected] 1 points 17 hours ago (1 children)

Just adjust it if you actually need the RAM and the ARC isn't relinquishing it quickly enough.

Put options zfs zfs_arc_max=17179869184 in /etc/modprobe.d/zfs.conf, run update-initramfs -u, and reboot; this will limit the ZFS ARC to 16 GiB.

Run arc_summary to see what it's using now.
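
Spelled out as a full sequence (the 16 GiB value is only an example, and the sysfs write is the runtime equivalent if you don't want to reboot):

  # /etc/modprobe.d/zfs.conf
  options zfs zfs_arc_max=17179869184

  update-initramfs -u
  reboot

  # or apply immediately without rebooting:
  echo 17179869184 > /sys/module/zfs/parameters/zfs_arc_max

  arc_summary | less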

As for using a simple fs on LVM, do you not care about data integrity?

[–] [email protected] 1 points 9 hours ago

this will limit ZFS ARC to 16GiB.

But if I have 32 GB to start with, that's still quite a lot, and, as mentioned, my current usage pattern doesn't really benefit from ZFS over any other common filesystem.

As for using a simple fs on LVM, do you not care about data integrity?

Where did you get that from? LVM has options to create RAID volumes and, again as mentioned, I can mix and match those with software RAID however I like. Also, a single host, no matter how sophisticated its filesystems and RAID setups, doesn't really matter when it comes to keeping data safe; that's what backups are for, and that's a whole other discussion.

[–] [email protected] 3 points 2 days ago

Something like fslint might be what you want: it scans folders, lists duplicates, and lets you choose how to deal with them. It's more manual, but I think it's what you're actually trying to achieve.
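
If fslint is hard to come by these days (it's no longer maintained), fdupes does roughly the same job from the command line; the path is a placeholder:

  fdupes -r /mnt/media
  fdupes -rd /mnt/media

The first just lists duplicate files, the second prompts you for which copies to keep.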

[–] AbouBenAdhem 2 points 2 days ago* (last edited 1 day ago) (1 children)

I haven't tried it because I've read a lot of negative discussions of it, and because (by my understanding) the only reasonable use case would be a large number of users who are each likely to have copies of the same files but don't want to expose their files to each other (so you can't just manually de-dupe).