this post was submitted on 01 Aug 2023
11 points (100.0% liked)

BecomeMe

753 readers

Social Experiment. Become Me. What I see, you see.

founded 1 year ago
top 4 comments
[–] [email protected] 4 points 1 year ago (1 children)

> "The first technique is called an encoder attack. PhotoGuard adds imperceptible signals to the image so that the AI model interprets it as something else. For example, these signals could cause the AI to categorize an image of, say, Trevor Noah as a block of pure gray. As a result, any attempt to use Stable Diffusion to edit Noah into other situations would look unconvincing.
>
> The second, more effective technique is called a diffusion attack. It disrupts the way the AI models generate images, essentially by encoding them with secret signals that alter how they're processed by the model. By adding these signals to an image of Trevor Noah, the team managed to manipulate the diffusion model to ignore its prompt and generate the image the researchers wanted. As a result, any AI-edited images of Noah would just look gray."
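For anyone curious what an "encoder attack" looks like mechanically: this is not PhotoGuard's actual code, just a toy sketch of the general idea. A real attack backpropagates through the diffusion model's image encoder; here a random linear map stands in for the encoder, and a PGD-style loop (signed gradient steps, clipped to an imperceptibility budget) nudges the image's latent toward the latent of a gray target. All names and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "encoder": a fixed random linear map from pixels to a latent vector.
# (A real attack would differentiate through the model's actual image encoder.)
W = rng.normal(size=(8, 64)) / 8.0
encode = lambda x: W @ x

def encoder_attack(image, target_latent, eps=0.05, step=0.01, iters=300):
    """PGD-style encoder attack: find a small perturbation delta
    (||delta||_inf <= eps) so that encode(image + delta) moves toward
    `target_latent` -- e.g. the latent of a pure-gray block."""
    delta = np.zeros_like(image)
    for _ in range(iters):
        residual = encode(image + delta) - target_latent
        grad = W.T @ residual            # analytic gradient of 0.5*||residual||^2
        delta -= step * np.sign(grad)    # signed gradient step
        delta = np.clip(delta, -eps, eps)  # keep the perturbation imperceptible
    return image + delta

image = rng.uniform(0.0, 1.0, size=64)   # a 64-pixel toy "photo"
gray = np.full(64, 0.5)                  # target: pure gray
adv = encoder_attack(image, encode(gray))

# The pixels barely change, but the latent drifts toward gray's latent.
before = np.linalg.norm(encode(image) - encode(gray))
after = np.linalg.norm(encode(adv) - encode(gray))
```

The key property is in the last two lines: the latent-space distance to the gray target shrinks even though every pixel moved by at most `eps`. Any editing model that works from the latent then "sees" something closer to gray than to the real photo.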

What about copyright? I don't see why, as the copyright holder of an image, we can't just forbid them from using it, and if they do anyway, we have grounds to sue. Why are we OK with just letting companies steal and "use" our work? You wouldn't let them create a new ad with your image. And I would argue that taking someone's face and superimposing it onto porn is not fair use, but hey, I'm definitely not a lawyer, just someone dumping thoughts.

[–] [email protected] 1 points 1 year ago

Because right now AI companies have deep pockets and some amount of goodwill from the tech community due to the massive potential they hold, so they're given a free pass. It will take a while for the more privacy-minded countries to come around and rein these companies in, and to establish precedents for suing them when they use copyrighted material.

[–] [email protected] 3 points 1 year ago
  • PhotoGuard is a tool that prevents AI manipulation of images.
  • It works by altering photos in invisible ways, making them unrealistic or distorted.
  • PhotoGuard could help prevent people's selfies from being turned into nonconsensual pornography.
  • The need to detect and stop AI manipulation has become more urgent due to the development of generative AI.
  • PhotoGuard complements the watermarking method by preventing AI from changing images.
  • An MIT team has developed two methods to stop AI image editing.
  • PhotoGuard does not provide complete protection against deepfakes.
  • Tech companies are developing new AI models that can bypass any new defenses.
[–] [email protected] 2 points 1 year ago

Wouldn't it be trivial to run the image through a preprocessing function to remove the encoded bits?
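It's not quite trivial, since these perturbations can be optimized to survive common transforms, and aggressive purification degrades the legitimate image too, but the basic countermeasure is real: any low-pass preprocessing (JPEG re-compression, resizing, blurring) attenuates high-frequency, low-amplitude signals, which is exactly what these perturbations are. A toy sketch, using a 1-D "image" and a box blur standing in for JPEG re-compression; everything here is illustrative:

```python
import numpy as np

def low_pass(image, kernel=3):
    """Naive 'purification' preprocessing: a simple box blur.
    Attenuates high-frequency, low-amplitude components of the image --
    the kind of imperceptible signal an encoder attack adds."""
    padded = np.pad(image, kernel // 2, mode="edge")
    return np.convolve(padded, np.ones(kernel) / kernel, mode="valid")

clean = np.linspace(0.0, 1.0, 64)               # toy 1-D "photo": a gradient
perturbation = 0.02 * (-1.0) ** np.arange(64)   # high-frequency adversarial signal
adv = clean + perturbation

# How far apart are the protected and unprotected copies, before and after?
gap_before = np.max(np.abs(adv - clean))                         # 0.02
gap_after = np.max(np.abs(low_pass(adv) - low_pass(clean)))      # ~0.0067
```

The blur shrinks the alternating ±0.02 signal to about a third of its amplitude while leaving the smooth underlying gradient nearly intact. That said, the PhotoGuard researchers' framing (as I read it) is that protection is an arms race: perturbations can be re-optimized against known purification steps, and purification strong enough to kill them reliably also visibly damages the photo.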