this post was submitted on 05 Sep 2024
40 points (100.0% liked)

Selfhosted

So, I'm self-hosting Immich. The issue is that we tend to take a lot of pictures of the same scene/thing to later pick the best, so we can end up with 5~10 photos which are basically duplicates, but not quite.
Some duplicate finding programs put those images at 95% or more similarity.

I'm wondering if there's any way, probably at the file system level, for these near-duplicate images to be compressed together.
Maybe deduplication?
Have any of you guys handled a similar situation?

top 50 comments
[–] Nibodhika 14 points 2 months ago

This will be almost impossible. The short answer is that those pictures might be 95% similar but their binary data might be 100% different.

Long answer:

Images are essentially a long list of pixels, where each pixel is 3 numbers for Red, Green and Blue (and optionally Alpha if you're dealing with a transparent image, but you're talking about photos so I'll ignore that). That's a simple but very wasteful way to store image data, because an image is very likely to reuse the same color in many places, so you can instead list all of the colors the image uses and represent each pixel as an index into that list, which makes images occupy a LOT less space. Some formats go further: because your eye can't tell the difference between two very close colors, they merge all similar colors into a single one, making the color list WAY smaller and the whole image a LOT more compressed (but note that we lost information in this step).

Because of this, one image might choose color X at position Y while the other chooses color Z at position W. The binaries are now completely different, but an image comparison tool can tell you that X and Z are similar enough to count as the same, and from that work out what percentage of the two images matches. Outside of image software, though, nothing knows that these two completely different binaries show nearly the same picture. If you hadn't lost data by compressing the images in the first place, you could theoretically use data from one image to help compress the others (though the results wouldn't be great, since even uncompressed photos aren't as similar as you'd think), but images can be compressed a LOT more by throwing away unimportant data, so that trade-off isn't worth it, which is why JPEG is so ubiquitous nowadays.
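A quick way to see this in practice, as a minimal sketch assuming the Pillow and imagehash packages and two made-up burst-shot file names: a perceptual hash rates the two photos as nearly the same picture even though their byte-level hashes share nothing.

```python
# Minimal sketch (assumed packages: Pillow, imagehash; file names are made up).
# Perceptual hashes of two burst shots are nearly identical, byte hashes are not.
import hashlib
import imagehash
from PIL import Image

a, b = "burst_001.jpg", "burst_002.jpg"

# Byte-level view: any difference in encoding changes the whole digest.
sha_a = hashlib.sha256(open(a, "rb").read()).hexdigest()
sha_b = hashlib.sha256(open(b, "rb").read()).hexdigest()
print("identical bytes:", sha_a == sha_b)      # almost certainly False

# Perceptual view: a small Hamming distance means "looks like the same picture".
ph_a = imagehash.phash(Image.open(a))
ph_b = imagehash.phash(Image.open(b))
print("perceptual distance:", ph_a - ph_b)     # small (roughly 0-8) for near-duplicates
```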

All of that being said, a compression algorithm specifically designed for images could take advantage of this, but no general-purpose compressor can, and it's unlikely anyone has gone to the trouble of building one for this specific case. When each image is already compressed on its own, there's little to be gained by writing something that considers colors across multiple images, decides whether an image is similar enough to join a given group, and so on. It's an interesting question, and I wouldn't be surprised if Google has such an algorithm for storing burst shots it already knows are sequential, but for a home NAS I think it's unlikely you'll find anything.

Besides all of this, storage is cheap: just buy an extra disk and move some files over to it. That's likely to be your best way forward anyway.

[–] [email protected] 10 points 2 months ago

No, it is impossible to solve this at the filesystem level. In theory it would be possible to adapt a video codec to compress such photo series, but it would be a lot of work to integrate that into Immich.

[–] NeoNachtwaechter 9 points 2 months ago* (last edited 2 months ago)

we can have 5~10 photos which are basically duplicates

Have any of you guys handled a similar situation?

I decide which one is the best and then delete the others. Sometimes I keep 2, but that's an exception. I do that as early as possible.

I'm not worried about storage space at all (still many TB free), but keeping (near-)duplicates costs valuable time of my life, so I avoid it.

[–] carl_dungeon 7 points 2 months ago

No that’s really not possible. I’d recommend tossing the similar ones after you pick the “best”.

[–] [email protected] 5 points 2 months ago (2 children)

Not sure if you're aware, but Immich has a duplicate finder

[–] [email protected] 2 points 2 months ago (2 children)

From what I understand OP's images aren't the same image, just very similar.

[–] [email protected] 1 points 2 months ago

You can also adjust the threshold; however, that's probably not a great idea unless you want to manually accept/reject the duplicates.

[–] [email protected] 0 points 2 months ago (1 children)

Yeah, the duplicate finder uses a neural network to find duplicates I think. I went through my wedding album that had a lot of burst shots and it was able to detect similar images well.

[–] [email protected] 1 points 2 months ago (1 children)

Would be surprised if there is any AI involved. Finding duplicates is a solved problem.

AI is only involved in object detection and face recognition.

[–] [email protected] 1 points 2 months ago (1 children)

I wasn't sure if it was AI or not. According to the description on GitHub:

Utilizes state-of-the-art algorithms to identify duplicates with precision based on hashing values and FAISS Vector Database using ResNet152.

Isn't ResNet152 a neural network model? I was careful to say neural network instead of AI or machine learning.
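For illustration, the general approach that quoted line describes could be sketched roughly like this. This is not the tool's actual code, just a sketch of the technique; it assumes torch, torchvision, faiss-cpu and Pillow are installed, and the file names are made up.

```python
# Illustration only: embedding-based near-duplicate search with ResNet152 + FAISS.
import numpy as np
import faiss
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet152(weights=models.ResNet152_Weights.DEFAULT)
model.fc = torch.nn.Identity()        # keep the 2048-d feature vector, drop the classifier
model.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> np.ndarray:
    with torch.no_grad():
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
        v = torch.nn.functional.normalize(model(x).squeeze(0), dim=0)
    return v.numpy()

paths = ["wedding_001.jpg", "wedding_002.jpg", "wedding_003.jpg"]   # made-up files
vectors = np.stack([embed(p) for p in paths]).astype("float32")

index = faiss.IndexFlatIP(vectors.shape[1])   # inner product == cosine (vectors are normalized)
index.add(vectors)
scores, neighbors = index.search(vectors, 2)  # each image's closest other image
# A neighbor score close to 1.0 suggests a near-duplicate / burst shot.
```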

[–] [email protected] 1 points 2 months ago

Thanks for that link.

AI is the umbrella term for ML, neural networks, etc.

ResNet152 seems to be used only to recognize objects in the image to help when comparing images. I wasn't aware of that, and I'm not sure I would classify it as an actual image-deduplication tool, but I haven't looked at the code to determine how much they do with it.

As of now they still state that they want to use ML technologies in the future to help, so they either forgot to edit the readme or they do not use it.

[–] [email protected] 1 points 2 months ago

And immich-go can run one via the CLI.

[–] [email protected] 5 points 2 months ago* (last edited 2 months ago) (1 children)

Not sure if a de-duplicating filesystem would help with that or not. It depends, I guess, on whether the similar images actually share anything at the block level.

Maybe try setting up a small, test ZFS pool, enabling de-dup, adding some similar images, and then checking the de-dupe rate? If that works, then you can plan a more permanent ZFS (or other filesystem that supports de-duplication) setup to hold your images.
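A rough sketch of that throwaway experiment, assuming the ZFS userland tools are installed, root privileges, and made-up paths:

```python
# Rough sketch of the throwaway ZFS dedup experiment described above.
# Assumes ZFS tools are installed and this runs as root; paths are made up.
import subprocess

def run(*cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run("truncate", "-s", "2G", "/tmp/dedup-test.img")           # backing file for a test pool
run("zpool", "create", "testpool", "/tmp/dedup-test.img")    # throwaway pool, mounts at /testpool
run("zfs", "set", "dedup=on", "testpool")                    # enable block-level dedup
run("cp", "-r", "/path/to/burst-shots", "/testpool/")        # copy in a set of similar photos
run("zpool", "list", "-o", "name,size,alloc,dedupratio", "testpool")
# A dedupratio near 1.00x would mean the near-duplicate photos share almost no blocks.
```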

[–] [email protected] 4 points 2 months ago

Storage is cheap. You suggest combining the images and storing the difference.

You wouldn't be able to separate the images anymore; you'd have to store them in a container with one common base image and then decide later which image to look at.

You could also take a short video and only display one image.

AVIF uses a video compression algorithm (AV1), so an AVIF image is basically a single frame of a video.

Btw, I wouldn't worry too much about this problem. Storage is cheap. Try saving ten 4K videos and you'll laugh about the size of your image library.

[–] just_another_person 4 points 2 months ago* (last edited 2 months ago) (2 children)

Well how would you know which ones you'd be okay with a program deleting or not? You're the one taking the pictures.

Deduplication checking is about files that have exactly the same data payload contents. Filesystems don't have a concept of images versus other files. They just store data objects.

[–] [email protected] 4 points 2 months ago (2 children)

I'm not saying to delete anything; I'm asking whether the file system could save space with something similar to deduping.
If I understand correctly, deduping works by sharing the same data blocks between files, so there's no actual data loss.

[–] WhatAmLemmy 6 points 2 months ago* (last edited 2 months ago)

I believe this is what some compression algorithms can do if you compress the similar photos into a single archive. It sounds like that's what you want (e.g. archive each day), have Immich cache the thumbnails, and only decompress the archive when you view the full resolution. Maybe test some algorithms like zstd against a group of similar photos vs. compressing them individually?

FYI, filesystem deduplication works on content hashes (per block or per file), and only exact 1:1 binary duplicates share the same hash, so near-duplicate photos won't benefit from it.

Also, modern image and video encodings are already heavily optimized, which is why re-compressing a JPG or MP4 offers negligible savings and sometimes even increases the file size.
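A quick way to test that, as a sketch assuming the zstandard Python package and a hypothetical burst/ folder of similar JPEGs:

```python
# Sketch: compare zstd-compressing a burst of similar JPEGs together vs. one by one.
# Assumes the `zstandard` package (pip install zstandard); the burst/ folder is made up.
import glob
import zstandard as zstd

files = sorted(glob.glob("burst/*.jpg"))
cctx = zstd.ZstdCompressor(level=19)

individually = sum(len(cctx.compress(open(f, "rb").read())) for f in files)
together = len(cctx.compress(b"".join(open(f, "rb").read() for f in files)))

print("compressed one by one: ", individually, "bytes")
print("compressed as one blob:", together, "bytes")
# For already-compressed JPEGs the two numbers usually end up very close.
```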

[–] [email protected] 1 points 2 months ago

I don't think there's anything commercially available that can do it.

However, as an experiment, you could:

  • Get a group of photos from a burst shot
  • Encode them as individual frames with a modern video codec, e.g. using VLC.
  • See what kind of file size you get with the resulting video output.
  • See what artifacts are introduced when you play with encoder settings.

You could probably/eventually script this kind of operation if you have software that can automatically identify and group images.
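A sketch of that experiment driving ffmpeg from Python instead of VLC; the folder name, codec and CRF value are just assumed starting points to play with.

```python
# Sketch of the experiment above, driving ffmpeg rather than VLC.
# Assumes ffmpeg is installed; the glob pattern and CRF value are starting points.
import subprocess

cmd = [
    "ffmpeg",
    "-framerate", "1",           # one photo per second of "video"
    "-pattern_type", "glob",
    "-i", "burst/*.jpg",         # hypothetical folder of burst shots
    "-c:v", "libx265",           # modern codec; libaom-av1 is another option
    "-crf", "22",                # quality knob: lower = better quality, bigger file
    "-pix_fmt", "yuv420p",
    "burst.mkv",
]
subprocess.run(cmd, check=True)
# Compare burst.mkv's size to the sum of the JPEGs, then step through the frames
# to see what artifacts the encoder introduced at various settings.
```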

[–] [email protected] 2 points 2 months ago

Compression?

[–] [email protected] 2 points 2 months ago

When do you do the choosing? Try moving that step earlier in the process to save storage.

[–] [email protected] 1 points 2 months ago (1 children)

That basic idea is roughly how compression works in general; think zip, gzip, etc. Identify snippets of highly used byte sequences and create a "map" of where each sequence is used. These methods work great on simple types of data, like text files, where there's a lot of repetition. Photos have a lot more randomness and tend not to compress as well, at least not so simply.

You could apply the same methods to multiple image files but I think you’ll run into the same challenge. They won’t compress very well. So you’d have to come up with a more nuanced strategy. It’s a fascinating idea that’s worth exploring. But you’re definitely in the realm of advanced algorithms, file formats, and storage devices.

That’s apparently my long response for “the other responses are right”
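A tiny illustration of that point, with hypothetical file names: a general-purpose compressor shrinks plain text a lot, but an already-compressed JPEG barely at all.

```python
# General-purpose compression on text vs. an already-compressed JPEG.
# File names are hypothetical.
import zlib

for path in ["notes.txt", "photo.jpg"]:
    data = open(path, "rb").read()
    packed = zlib.compress(data, 9)
    print(f"{path}: {len(data)} -> {len(packed)} bytes "
          f"({100 * len(packed) / len(data):.0f}% of original)")
```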

[–] [email protected] 1 points 2 months ago

Yeah, the image bytes are random because they're already compressed (unless they're bitmaps, which is not likely).

[–] [email protected] 1 points 2 months ago

I went through the same dilemma. The old Synology photo software had a duplicate finder, but they removed that feature in the "new" version, and even when it existed it wasn't very powerful and offered no adjustability.

In the end, I paid for a program called "Excire Foto", which can pull images from my NAS and not only finds duplicates in a customizable, accurate way, but also has a local AI search that bests even Google Photos.

It runs on Windows, keeps its own database, and can be used in read-only mode if you only want to make use of the search feature.

To me, it was worth the investment.

Side note: if I only had <50,000 photos, then I'd probably find a free/cheaper way to do it. At the time, I had over 150,000 images, going back to when the first digital cameras were available + hundreds of scanned negatives and traditional (film) photos, so I really didn't want to spend weeks sorting it all out!

Oh, and the software can even tag your photos with subjects so the tags are baked into the EXIF data (and other programs can make use of them).

[–] [email protected] 1 points 2 months ago* (last edited 2 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters  More Letters
Git            Popular version control system, primarily for code
NAS            Network-Attached Storage
ZFS            Solaris/Linux filesystem focusing on data integrity

3 acronyms in this thread; the most compressed thread commented on today has 9 acronyms.

[Thread #953 for this sub, first seen 5th Sep 2024, 23:05] [FAQ] [Full list] [Contact] [Source code]

[–] rollerbang 0 points 2 months ago

File system deduplication might be your best bet, though I don't know how much it would actually save.