
So, I'm self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene or thing to later pick the best one, so we can end up with 5–10 photos that are basically duplicates, but not quite.
Some duplicate-finding programs rate those images at 95% or higher similarity.

I'm wondering if there's any way, probably at the file system level, for similar images to be compressed together.
Maybe deduplication?
Have any of you handled a similar situation?

Nibodhika 14 points 2 months ago

This will be almost impossible. The short answer is that those pictures might be 95% similar, but their binary data might be 100% different.
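To see that concretely before the long answer, here's a rough Python sketch (assuming Pillow is installed; the two file names are hypothetical stand-ins for near-duplicate shots). A cryptographic hash treats the files as completely unrelated, while even a crude perceptual hash sees them as near-identical:

```python
# Minimal sketch, assuming Pillow is installed (pip install Pillow).
# The file names are hypothetical near-duplicate shots of one scene.
import hashlib
from PIL import Image

def file_hash(path):
    """Exact binary identity: any dedup tool sees only this."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def average_hash(path, size=8):
    """Tiny perceptual hash: shrink to 8x8 grayscale, then mark each
    pixel as above/below the mean brightness. Similar scenes produce
    similar bit strings even when the files share no bytes at all."""
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    avg = sum(pixels) / len(pixels)
    return "".join("1" if p > avg else "0" for p in pixels)

a, b = "shot_001.jpg", "shot_002.jpg"      # hypothetical file names
print(file_hash(a) == file_hash(b))        # almost certainly False
pa, pb = average_hash(a), average_hash(b)
diff = sum(x != y for x, y in zip(pa, pb))
print(f"{diff}/64 perceptual bits differ") # small for similar shots
```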

Long answer:

Images are essentially a long list of pixels; each pixel is three numbers for red, green, and blue (plus an optional alpha channel for transparency, but you're talking about photos, so I'll ignore that). This is a simple but very wasteful way to store image data, because an image is very likely to use the same color in multiple places. Instead, you can list all of the colors the image uses once, and then represent each pixel as an index into that list, which makes the image occupy a LOT less space.

Some formats go further: because your eye can't see the difference between two very close colors, they group all similar colors into a single color, making the list of colors used in the image WAY smaller and the whole image a LOT more compressed (but notice that we lost information in this step).

Because of this, it's possible that one encoder chooses color X at position Y while the other chooses color Z at position W. The binaries are now completely different, but an image comparison tool can still tell you that X and Z are close enough to count as the same, and score the images by how many positions match that way. Outside of image software, though, nothing knows that these two completely different binary files show the same thing.

If you hadn't lost data by compressing the images in the first place, you could theoretically compress multiple images against each other (though the results wouldn't be great, since even uncompressed images won't be as similar as you'd think). But a single image can be compressed a LOT more by throwing away unimportant data, so the trade-off isn't worth it, which is why JPEG is so ubiquitous nowadays.
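Here's a toy version of that palette idea, standard library only, with made-up pixel values. Note how the lossy step merges two colors your eye can't distinguish, which is exactly the information we can never get back:

```python
# Toy illustration of indexed color: each pixel is an (R, G, B) tuple.
# Pixel values are made up for the example.
pixels = [(200, 30, 30), (201, 31, 29), (10, 10, 240), (200, 30, 30)]

# Naive storage spells out every pixel. Indexed storage lists each
# color once and stores small indices instead.
palette = sorted(set(pixels))
indexed = [palette.index(p) for p in pixels]
print(palette)   # [(10, 10, 240), (200, 30, 30), (201, 31, 29)]
print(indexed)   # [1, 2, 0, 1]

# Lossy step: merge colors the eye can't tell apart by rounding each
# channel to a coarse grid. (200,30,30) and (201,31,29) collapse into
# one palette entry: a smaller palette, but the information is gone.
def quantize(p, step=16):
    return tuple((c // step) * step for c in p)

q_palette = sorted(set(quantize(p) for p in pixels))
print(q_palette)  # two entries instead of three
```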

All of that being said, a compression algorithm specifically designed for images could take advantage of this, but no general-purpose compressor can, and it's unlikely anyone went to the trouble of building a compression scheme for this specific case. When each image is already compressed, there's little to be gained by writing something that takes colors from multiple images into consideration, has to decide whether an image is similar enough to be bundled into a given group, and so on. It's an interesting question, and I wouldn't be surprised if Google has such an algorithm to store bursts of photos it already knows will be sequential. But for a home NAS, I think it's unlikely you'll find anything.
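For the curious, here's roughly what such a cross-image scheme could exploit (it's essentially what video codecs do between frames). The two "frames" below are synthetic stand-ins for uncompressed pixel data of the same scene; storing the second as a delta against the first compresses far better than storing it alone:

```python
# Sketch of cross-image (delta) compression on *uncompressed* data.
# The frames are synthetic stand-ins: random bytes as a "scene", plus
# a slightly noisy copy of it as the second shot. Stdlib only.
import random, zlib

random.seed(0)
frame_a = bytes(random.randrange(256) for _ in range(100_000))
# frame_b is frame_a with a little per-pixel noise: two shots, one scene.
frame_b = bytes((v + random.randrange(-2, 3)) % 256 for v in frame_a)

# Compress frame_b on its own vs. as a delta against frame_a.
solo = zlib.compress(frame_b, 9)
delta_bytes = bytes((b - a) % 256 for a, b in zip(frame_a, frame_b))
delta = zlib.compress(delta_bytes, 9)
print(len(solo), len(delta))  # the delta stream is far smaller
```

The delta stream only contains the tiny differences between the shots, so it has far fewer distinct values and compresses well; but as soon as each shot has been through JPEG, the correspondence between their bytes is destroyed and this trick stops working.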

Besides all of this, storage is cheap. Just buy an extra disk and move some files over there; that's likely your best way forward anyway.