this post was submitted on 10 Sep 2023
689 points (95.6% liked)

Technology

60087 readers
4420 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] sebi -1 points 1 year ago (1 children)

Because generative neural networks always include some random noise. Read more about it here

[–] stevedidWHAT 3 points 1 year ago (2 children)

Isn’t that article about GANs?

Isn’t GPT not a GAN?

[–] PetDinosaurs 6 points 1 year ago (1 children)

It almost certainly has some GAN-like pieces.

GANs are part of the NN toolbox, like CNNs and RNNs and such.

Basically all commercial algorithms (not just NNs, everything) are what I like to call "hybrid" methods, which means you keep throwing different tools at the problem until things work well enough.

[–] stevedidWHAT 3 points 1 year ago* (last edited 1 year ago) (1 children)

The findings were for GAN models, not GAN-like components, though.

[–] PetDinosaurs 1 points 1 year ago (1 children)

It doesn't matter. Even the training process makes it pretty much impossible to tell these things apart.

And if we do find a way to distinguish them, we'll immediately incorporate it into the model design in a GAN-like manner, and we'll soon be unable to distinguish them again.
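That cat-and-mouse dynamic can be sketched as a toy loop. To be clear, this is an illustrative assumption, not anything from GPT or the article: the "feature", the midpoint detector, and the update rule are all made up. The point is only that any fixed statistical detector hands the generator a training signal:

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" samples: some scalar feature of human output, clustered near 1.0.
real = rng.normal(1.0, 0.1, 1000)

# Generator starts off-distribution (feature near 0.0), so it is easy to flag.
gen_mu = 0.0

def threshold(real, fake):
    # Simplest possible detector: split at the midpoint of the two means.
    return (real.mean() + fake.mean()) / 2

def detection_rate(fake, thresh):
    # Fraction of generated samples the detector correctly flags.
    return (fake < thresh).mean()

for _ in range(50):
    fake = rng.normal(gen_mu, 0.1, 1000)
    t = threshold(real, fake)
    # GAN-like move: shift the generator toward the detector's boundary,
    # eroding whatever signal the detector relied on.
    gen_mu += 0.2 * (t - gen_mu)

fake = rng.normal(gen_mu, 0.1, 1000)
```

After the loop the detector is back at chance (about 50% of fakes flagged), which is the "soon unable to distinguish again" outcome in miniature.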

[–] stevedidWHAT 0 points 1 year ago

Which is why hardcoded fingerprints/identifiers are required: they identify the specific model as the speaker, rather than trying to classify output as AI vs. human. Which is what we're ultimately agreeing on here, outside of the pedantry of the article and its scientific findings:

Trying to detect, as AI, a model that is built to pass as human is counterintuitive. They're direct opposites: if one works, both can't exist in this implementation.

The hard part will obviously be making sure that such a "fingerprint" wouldn't be removable, which will take some wild math and out-of-the-box thinking, I'm sure.

Tough problem!
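One way such a fingerprint could work is a token-level watermark. The sketch below is a hypothetical toy, loosely in the spirit of published LLM watermarking schemes: the `green_set`/`watermark_score` names, the vocabulary, and the half-split scheme are all assumptions for illustration, not any deployed system:

```python
import hashlib
import random

def green_set(prev_token: str, vocab: list[str]) -> set[str]:
    # Seed a PRNG from the previous token, so anyone holding the scheme
    # can reproduce the same "green" half of the vocabulary.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    return set(random.Random(seed).sample(vocab, len(vocab) // 2))

def watermark_score(tokens: list[str], vocab: list[str]) -> float:
    # Fraction of tokens drawn from the green half: ~0.5 for ordinary
    # text, close to 1.0 for watermarked text.
    hits = sum(t in green_set(p, vocab) for p, t in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

vocab = [f"w{i}" for i in range(100)]
rng = random.Random(42)

# Watermarked generation: always pick the next token from the green set.
marked = ["w0"]
for _ in range(200):
    marked.append(rng.choice(sorted(green_set(marked[-1], vocab))))

# Unwatermarked text picks tokens uniformly, landing near a 0.5 score.
plain = ["w0"] + [rng.choice(vocab) for _ in range(200)]
```

It also shows exactly the removability problem: paraphrasing (re-picking tokens without the green-set bias) drags the score right back to 0.5.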

[–] [email protected] 2 points 1 year ago

It's not even about diffusion models. Adversarial networks are basically obsolete.