I mean, it could be a manual photoshop job. Just because it’s not AI doesn’t mean it’s real.
But also the detector is probably wrong - it’s likely an AI image using a different model than the detector was trained to detect.
There were a lot of really good images like that well before AI. Anyone remember Photoshop Friday?
There's a sort of... sheen, to a lot of AI images. Obviously you can prompt this away if you know what you're doing, but it's developing a bit of a look to my eye when people don't do that.
Can we bring that back?
I mean, it could be a manual photoshop job.
It could, but the double spiral in the shell indicates AI to me. Snail shells don't grow like that. If it was a manual job, they would have used a picture of a real shell.
Edit: plus the cat head looks weird where it connects to the body, and the markings don't look right to me.
Agreed. The aggressive depth of field is another smoking gun that usually indicates an AI image.
Snail shells don't grow like that but this is clearly a snat, not a snail.
Even cnailshells would have to adhere to the basic laws of conchology though
Also the fact that the grain on the side of the shell is perpendicular to the grain on the top, and it changes where the cat ear comes up in front of it.
Very telltale sign of AI is a change of pattern in something when a foreground object splits it.
Not saying it's always a guarantee, but it's a common quirk and it's pretty easy to identify.
I can tell from some of the pixels and from seeing quite a few shops in my time.
Your reference is ancient and dusty and it makes me feel old. Stop it.
…you know people made fake pictures before image generation, right?
They made fake pictures before computers existed too.
This obviously can’t be true, how did they do it without Photoshop? /s
Miao
Ignorant Americans, never even heard of the common snailcat
Where the fuck are you from that they aren’t called catsnails? Odd. Been catsnails here since I can remember.
I don't get it. Maybe it's right? Maybe a human made this?
The picture doesn't have to be "real", it just has to be non-AI. Maybe this was made in Blender and Photoshop or something.
Or maybe your expectations from ai detection are too high.
I have two of those cats. I still can't catch them when it's time to go to bed.
No spooky eyes, no extra limbs, no eerie smile? - 100% real, genuine photograph! 👍
It's a snat. They are not easy to catch, because they are fast. Also, they never land on their shell.
cute snat...
That's clearly a cail
Are we all looking at the same snussy?
We get them a lot around here. They don't make for good pets, but they keep the borogoves at bay.
Which is great, honestly. Borogoves themselves are fine, but it's not worth the risk letting them get all mimsy.
Such a cute kitty snail! Can you post just the picture?
It's an "AI generated image at snail" :3
Detection for snail is positive.
I wanted to get a cat but I discovered I was allergic to the slime trail.
yeah just shopped
It's not, look at the shell
That's a normal housecat. Not sure what people are confused about
Clearly AI is on the verge of taking over the world.
Well duh it detects AI generated images that are at scale and that snail cat is way too small for it
It has been 0 days since classified military gene research was leaked by interrogating AI-detection models
My guess is the AI was trained on a combination of cat videos and SpongeBob.
That is a weird looking rabbit
Honestly, it's pretty good, and it still works if I use a lower-resolution screenshot without metadata (I haven't tried adding noise, or overlaying something else, but those might break it). This is PixelWave, not Midjourney, though.
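For anyone who wants to try the same perturbations, here's a minimal sketch of the two transforms mentioned (downscaling and adding noise), using a plain NumPy array as a stand-in for the screenshot. This is purely illustrative; a real test would re-encode an actual image file and run it through the detector.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the original screenshot: a 64x64 grayscale image.
image = rng.uniform(0, 255, size=(64, 64))

# "Lower-resolution screenshot": 2x2 block averaging halves the resolution.
lowres = image.reshape(32, 2, 32, 2).mean(axis=(1, 3))

# "Adding noise": a small Gaussian perturbation, clipped to the valid pixel range.
noisy = np.clip(lowres + rng.normal(0, 5, size=lowres.shape), 0, 255)

print(lowres.shape)  # (32, 32)
```

Saving the result as a fresh PNG also drops any generator metadata, so a detector that passes this test is reading the pixels rather than the EXIF tags.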
So only 2% "not likely to be AI-generated or deepfake"... that means that it's almost definitely AI, got it!? :-P
There are a bunch of reasons why this could happen. First, it's possible to "attack" some simpler image classification models; if you get a large enough sample of their outputs, you can mathematically derive a way to process any image such that it won't be correctly identified. There have also been reports that even simpler processing, such as blending a real photo of a wall with a synthetic image at a very low percentage, can trip up detectors that haven't been trained to be more discerning.

But it's all in how you construct the training dataset, and I don't think any of this is a good enough reason to give up on using machine learning for synthetic media detection in general; in fact, this example gives me the idea of using autogenerated captions as an additional input to the classification model.

The challenge there, as in general, is trying to keep such a model from assuming that all anime is synthetic, since "AI artists" seem to be overly focused on anime and related styles...