The only option I can think of to avoid it is to sell your router and computers and move to a cabin in remote Montana. I'm seriously considering it at this point.
How would the extension be able to tell?
We need an AI for that
It exists, it just doesn't work very well. Simply converting the AI-generated image to JPEG is enough: the compression, even if imperceptible to us, will cause the detector to stop recognizing the image as AI-generated. It also costs money to run at this scale because it's computationally intensive.
Plus, any discriminator model you train for this task, if successful, would immediately be used as a mechanism for training AI to generate more realistic images to deceive it.
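To give a sense of how cheap that workaround is, the re-encode step is a couple of lines with Pillow (file names here are placeholders; quality 90 is visually near-lossless but already changes the pixel statistics many detectors key on):

```python
# Re-encode an AI-generated PNG as JPEG; the compression artifacts are
# imperceptible to a human but enough to throw off many AI-image detectors.
from PIL import Image

img = Image.open("ai_generated.png").convert("RGB")  # placeholder file name
img.save("reencoded.jpg", format="JPEG", quality=90)
```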
AI: A person with seven fingers and a knee that bends the wrong way? Looks good to me!
There was a recent witch hunt on the terraria subreddit because an artist posted fan art and the character had 6 fingers on one hand. It turned out to just be a mistake (OP showed proof of the drawing process).
You can't tell the difference and anyone who says they can with 100% certainty is not being honest.
It'd be able to tell as easily as all other AI-recognition software is able to tell.
You can tell by the way it is.
A lot of AI "art" is tagged as such, and there are websites that exist for the sole purpose of generating AI art. I know there are Google search operators to block certain sites, but I was hoping there was also an extension that at least filters out pictures that are openly tagged as AI-generated. Stuff like this, for example.
I suppose someone could make filters that block art credited to AI, DALL-E, Midjourney, etc.
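The core of that would just be a keyword check; here's a rough sketch in Python to show the logic (the keyword list and field names are made up, and a real extension would run the equivalent in JavaScript over the page's images):

```python
# Hypothetical credit/tag filter: hide an image whose caption, credit, or tags
# mention a known generator. Keyword list is illustrative, not exhaustive.
AI_KEYWORDS = ("ai generated", "midjourney", "dall-e", "dalle", "stable diffusion")

def should_block(credit: str, alt_text: str, tags: list[str]) -> bool:
    haystack = " ".join([credit, alt_text, *tags]).lower()
    return any(keyword in haystack for keyword in AI_KEYWORDS)

print(should_block("Art by Midjourney", "", []))           # True
print(should_block("Watercolor study", "", ["painting"]))  # False
```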
iNaturalist can do this! Also eBird
iNaturalist released their automated species identification tool in 2017. It's not entirely clear since XKCDs aren't datestamped, but going by archive.org the page has been indexed since at least 2014. Assuming archive.org indexed it shortly after it was posted, iNaturalist got the research done two years ahead of schedule!
Not technically possible.
I use Stable Diffusion to generate images, and that will put information in EXIF tags in the image (a rough check for those tags is sketched after this list). However:
- Your browser would have to download the full images, not just the thumbnails, from the image search engine. In theory, I guess image search engines could provide a way to exclude ones that explicitly flag themselves that way, but it'd be slow for your browser to do it.
- There is no guarantee that all AI image generation software will do the same.
- If anyone has modified the image, or made a derived image in Photoshop or GIMP or whatever, it won't have the EXIF data.
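For what it's worth, checking for those tags is only a few lines with Pillow; the key names and keywords below are just what I happen to see from my own setup, and as noted above they vanish the moment anything re-encodes the file:

```python
# Rough check for generator metadata (PNG text chunks / EXIF values).
# Key and keyword choices are illustrative; other tools write different things.
from PIL import Image

def looks_tagged_as_ai(path: str) -> bool:
    img = Image.open(path)
    info_text = " ".join(f"{k} {v}" for k, v in img.info.items())
    exif_text = " ".join(str(v) for v in img.getexif().values())
    haystack = (info_text + " " + exif_text).lower()
    return any(k in haystack for k in ("parameters", "stable diffusion", "prompt"))
```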
I doubt that there will ever be a reliable way to detect AI-generated images just from the image data. You could make something that works for some generators today, but it'll be heavily tied to the model and generator, and it'll break down as things change... and it's a fast-moving field.
In the long run, it might be a better shot to try to identify images that are not AI-generated than those that are. Like, have cameras cryptographically sign the images they take or something, because it can be useful to show that an image is actually a legit photograph. But that'll restrict what you're finding and exclude images that one might want to keep.
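The signing idea in toy form, just to show the shape of it (the file name is a placeholder, and the real difficulty is key management and hardware, not this code):

```python
# Toy version of "the camera signs what it captures": sign a photo's bytes with
# a private key that would live in the camera, verify with the published key.
from cryptography.hazmat.primitives.asymmetric import ed25519

camera_key = ed25519.Ed25519PrivateKey.generate()  # stays inside the camera
public_key = camera_key.public_key()               # published for verification

with open("photo.jpg", "rb") as f:                 # placeholder file name
    image_bytes = f.read()

signature = camera_key.sign(image_bytes)
public_key.verify(signature, image_bytes)  # raises InvalidSignature if the file was altered
```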
For a solution that will become increasingly less viable, one might be able to cobble together something with TinEye. It can look for similar images, and images that existed prior to widespread use of generative AI probably weren't generated with it. But over time, more and more of the images out there will have been made after the rise of generative AI.
Hm ...dang. I was hoping that there would at least be some sort of filter to remove pictures that are directly / openly tagged as AI art from searches, but I guess that would rely too much on the goodwill of the uploader to even use tags like these in the first place.
As soon as the extension exists, the websites will remove the tags; no site is going to willingly let itself be filtered out of Google results, etc., for something entirely in its control.
AI art in its current form is an incredibly new phenomenon, maybe a year or two old tops. If OpenAI has admitted there's no reliable way to detect ChatGPT output, AI art is just as inscrutable to automated filtering.
tl;dr the world has changed again, we adapt
I mean, it wouldn't hurt to have it, but I doubt that it'd be too efficacious.
I've uploaded images I've generated to two communities on lemmy that are focused on AI-generated imagery. My original image has the tags, but lemmy has built-in image hosting, which most people there use, and it strips tags off the image (I assume to keep people from inadvertently doxxing themselves when they upload photographs in which the camera embedded GPS location data in EXIF tags, which has been something of an issue in the past). Imgur also strips tags from uploaded images, and I wouldn't be surprised if most image-hosting services do the same.
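To illustrate why the tags can't be relied on, this is roughly what a host's re-encode does to them (sketched with Pillow; real services do the equivalent server-side, and the file names are placeholders):

```python
# Copy only the pixels into a fresh image and save that: all EXIF / PNG text
# metadata (GPS, generator parameters, everything) is gone in the hosted copy.
from PIL import Image

original = Image.open("upload.png").convert("RGB")  # placeholder file name
clean = Image.new("RGB", original.size)
clean.putdata(list(original.getdata()))             # pixels only, no metadata
clean.save("hosted_copy.png")
```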
It's also a problem from an AI-training standpoint, because one of the more significant issues that comes up is that you don't want to train AIs on AI-generated images, and people training AIs have no good way to filter them out either.
I have been very interested in an extension like this myself recently, and I believe it is fully possible.
Isitai.com is currently working on a paid extension that lets you spot AI images directly in your browser, though it is not meant to block them before they show up.
An aggregated common block list might be necessary to make what we want happen, or a desktop app that scans for AI signature patterns in the entire frame buffer.
I've noticed that Pinterest is now so overstuffed with falsely attributed AI art that it is spilling over into Facebook communities, where people will celebrate watercolors and oil paintings attributed to painters who never made them.
The first thing an AI-blocker extension should do is thus to cross-reference all images with Pinterest and just NOT show anything that is available there. Harsh, but the user could then choose to reveal a hidden image on a case-by-case basis. DeviantArt is another community that is now chock-full of AI images. People don't repost from there very often, as artists there tend to claim ownership of the images themselves, but it might be nice to block out their entire catalogue as well.
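The cross-referencing part would need some kind of reverse-image-search service, but the blunt "hide anything served from those sites" version is not much logic. Here it is sketched in Python for readability (the domain list is just an example; a real extension would do this in JavaScript and offer the case-by-case reveal in its UI):

```python
# Domain block list: hide any image whose host matches a listed domain,
# leaving it to the user to reveal individual images on demand.
from urllib.parse import urlparse

BLOCKED_DOMAINS = {"pinterest.com", "pinimg.com", "deviantart.com"}  # examples

def should_hide(image_url: str) -> bool:
    host = urlparse(image_url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in BLOCKED_DOMAINS)

print(should_hide("https://i.pinimg.com/originals/abc.jpg"))  # True
print(should_hide("https://example.org/painting.jpg"))        # False
```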
Ironically, one of the reasons AI imagery doesn't carry voluntary tagging is the anti-AI sentiment itself. It has resulted in anti-AI witch hunts and abuse, so why would people flag themselves and paint a target on their backs like that? Since voluntary tagging is basically the only way to know whether something is AI-generated, the extreme anti-AI folks have dug their own grave on this one.
PS, my avatar is AI-generated imagery.