this post was submitted on 02 Jul 2023

Game Preservation

A few months ago I was doing some reading on the theory and algorithms for descreening scans (also called inverse halftoning), to try to understand how tools like Sattva Descreen work and what alternate algorithmic approaches exist (and what different tradeoffs are possible).

I might make a discussion post later about some of my thoughts and questions about the possible tradeoffs (mostly related to retaining line clarity), but I figured for now I should share some of the resources I found.

(All of the papers mentioned below are available online as PDFs, a search of the title and first author should find them)

Best introduction paper

"Inverse Halftoning Using Inverse Methods" (Gustavsson, 2007) gives a really nice introduction to the theory behind halftoning and discusses several of the different approaches for descreening algorithms that were in the literature at the time.

I highly recommend reading through at least Chapters 2 & 3. They are very approachable and informative.

Also the author gets brownie points from me for criticizing some papers on algorithmic approaches for not testing with actually scanned images.
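To make the forward problem concrete, here's a minimal sketch of ordered-dither halftoning (my own illustration, not from the paper): it turns a grayscale image into the kind of binary dot pattern that descreening then has to undo. The 4x4 Bayer matrix is a standard threshold pattern; the image is assumed to be an 8-bit grayscale stored as a list of rows.

```python
# Minimal ordered-dither halftoning sketch (illustration only).
# The 4x4 Bayer matrix tiles across the image as a threshold pattern.

BAYER_4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def halftone(image):
    """Return a binary (0/255) halftone of an 8-bit grayscale image."""
    out = []
    for y, row in enumerate(image):
        out_row = []
        for x, pixel in enumerate(row):
            # Scale the 0-15 Bayer entry up to a 0-255 threshold.
            threshold = (BAYER_4[y % 4][x % 4] + 0.5) * 16
            out_row.append(255 if pixel > threshold else 0)
        out.append(out_row)
    return out
```

A flat 50% gray input comes out as a checkerboard-like pattern with half the pixels on, which is exactly the local-average-preserving behavior that inverse halftoning tries to exploit.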

Other papers

  • Inverse Halftoning Using Wavelets (Xiong et al., 1997)
    • This paper is almost entirely math that is well outside of my knowledge, but it looks like it does something with using edge information to improve descreening.
  • Recent Advances in Digital Halftoning and Inverse Halftoning Methods (Meşe & Vaidyanathan, 2001)
    • This paper discusses a lookup-table-based method for descreening (different from the typical Fourier transform + Gaussian blur approach), though I don't know how it compares to other algorithmic approaches. (Another very math-heavy paper.)
  • Deep Joint Image Filtering (Li et al., 2016)
    • This paper discusses an interesting Convolutional Neural Network-based approach for descreening (and some other related processes) and is a relatively recent paper. I don't understand the math behind it, but the idea of using deep learning to pick up on the relationships between non-screened and screened versions of images sounds promising. I imagine one of the big challenges with approaches like this is getting a good training set: ideally one comprised mostly of, or at least containing many, real scanned images (as opposed to just applying digital halftoning to images and training on those).
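For reference, the "Fourier transform + Gaussian blur" style of descreening that these algorithmic papers position themselves against boils down to low-pass filtering: transform to the frequency domain, suppress the screen frequencies, transform back. Here's a toy 1-D sketch of that idea (my own illustration; it uses a hard frequency cutoff rather than a Gaussian rolloff, for brevity):

```python
import cmath

def dft(signal):
    """Naive O(n^2) discrete Fourier transform of a real signal."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(spectrum):
    """Inverse DFT, returning the real part of each sample."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

def lowpass_descreen(scanline, keep):
    """Zero every frequency bin farther than `keep` from DC.

    The high-frequency screen pattern lives in the discarded bins,
    so what survives is the smooth underlying tone.
    """
    spec = dft(scanline)
    n = len(spec)
    for k in range(n):
        freq = min(k, n - k)  # distance from DC, accounting for mirroring
        if freq > keep:
            spec[k] = 0j
    return idft(spec)
```

In 2-D the same idea applies to the FFT of the whole scan, where the halftone screen shows up as bright peaks away from the center of the spectrum; a Gaussian rolloff suppresses them more gently than this hard cutoff, which is part of why the blur radius matters so much for retaining line clarity.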