this post was submitted on 28 Jul 2023
468 points (93.8% liked)
Technology
Predictable issue if you knew the fundamental technology that goes into these models. Hell, it should have been obvious to the layperson that it was headed this way once they saw the videos and heard the audio.
We're less sensitive to patterns in massive data; the point at which we can't tell fact from AI fiction comes before the point at which these machines can't tell. Good luck with the FB aunts.
A GAN's final goal is to develop content that is indistinguishable... Are we surprised?
Edit, since the person below me made a great point: GANs may be limited, but there's nothing that says you can't set up a generator LLM and a detector LLM for the sole purpose of improving the generator.
For laymen who might not know how GANs work:
Two AIs are developed at the same time: one that generates and one that discriminates. The generator creates a dataset, it gets mixed in with some real data, and then all of that gets fed into the discriminator, whose job is to say "fake or not".
Both AIs get better at what they do over time. This arms race creates more convincing generated data over time. You know your generator has reached peak performance when its twin discriminator has a 50/50 success rate: it's just guessing at that point.
There literally cannot be a better AI than the twin discriminator at detecting that generator's work. So anyone trying to make tools to detect chatGPT's writing is going to have a very hard time of it.
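To make the 50/50 point concrete, here's a toy sketch (all numbers and names are mine, not from any real detector): a tiny 1-D logistic-regression "discriminator". Against a weak generator whose output distribution differs from the real data, it scores well above chance; against a "perfect" generator whose output matches the real distribution exactly, it can only guess.

```python
import math
import random

def train_discriminator(real, fake, lr=0.1, epochs=200):
    """Fit a 1-D logistic 'discriminator': D(x) = sigmoid(w*x + b), P(real)."""
    w, b = 0.0, 0.0
    data = [(x, 1.0) for x in real] + [(x, 0.0) for x in fake]
    for _ in range(epochs):
        gw = gb = 0.0
        for x, label in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            gw += (label - p) * x   # gradient ascent on log-likelihood
            gb += (label - p)
        w += lr * gw / len(data)
        b += lr * gb / len(data)
    return w, b

def accuracy(w, b, real, fake):
    correct = sum(1 for x in real if 1.0 / (1.0 + math.exp(-(w * x + b))) > 0.5)
    correct += sum(1 for x in fake if 1.0 / (1.0 + math.exp(-(w * x + b))) <= 0.5)
    return correct / (len(real) + len(fake))

random.seed(0)
real = [random.gauss(0.0, 1.0) for _ in range(500)]
bad_fake = [random.gauss(3.0, 1.0) for _ in range(500)]      # weak generator: wrong mean
perfect_fake = [random.gauss(0.0, 1.0) for _ in range(500)]  # matches the real distribution

acc_bad = accuracy(*train_discriminator(real, bad_fake), real, bad_fake)
acc_perfect = accuracy(*train_discriminator(real, perfect_fake), real, perfect_fake)
print(f"vs weak generator:    {acc_bad:.2f}")       # well above 0.5
print(f"vs perfect generator: {acc_perfect:.2f}")   # roughly 0.5, i.e. guessing
```

Real GAN discriminators are neural nets on images rather than logistic regressions on scalars, but the failure mode is the same: once the generated distribution matches the real one, no held-out feature remains for any detector to latch onto.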
Fantastically put!
Tx!
Unless I'm mistaken, aren't GANs mostly old news? Most of the current SOTA image generation models and LLMs are either diffusion-based, transformers, or both. GANs can still generate some pretty darn impressive images, even from a few years ago, but they proved hard to steer and were often trained to generate a single kind of image.
I haven't been in decision analytics for a while (and people smarter than I am are working on the problem), but I meant more along the lines of the "model collapse" issue. Just because a human gives a thumbs up or down doesn't make the output human-written training data to feed back in. Eventually the stuff it outputs becomes "the most likely prompt response that this user will thumbs-up and accept". (Note: I'm assuming the thumbs up/down ratings have been pulled back into model feedback.)
Per my understanding, that's not going to remove the core issue, which is this:
Any sort of AI-detection arms race is doomed. There is ALWAYS new "real" video for training, and even if GANs are a bit outmoded, the core concept of using synthetically generated content to train is a hot thing right now. Technically, whoever creates the fake videos to train on would have a bigger training set than the checkers.
Since we see model collapse when we feed too much of this back into the model, we're in a bit of an odd place.
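Model collapse has a simple toy caricature (this is a deliberately simplified sketch, not a claim about how LLM training actually works): fit a distribution to data, sample from the fit, fit the next "generation" to those samples, and repeat. Estimation error compounds and the distribution's spread collapses.

```python
import random
import statistics

random.seed(42)

def next_generation(data, n):
    """'Train' a model (fit a Gaussian) on data, then 'generate' n new samples."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)  # the fitted model's std dev (MLE)
    return [random.gauss(mu, sigma) for _ in range(n)]

n = 20  # small training set each generation exaggerates the effect
data = [random.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data
initial_sd = statistics.pstdev(data)

# Each generation trains only on the previous generation's output.
for generation in range(200):
    data = next_generation(data, n)

final_sd = statistics.pstdev(data)
print(f"std dev: generation 0 = {initial_sd:.3f}, generation 200 = {final_sd:.6f}")
```

The variance shrinks generation over generation because each finite sample slightly underestimates the spread, and there's no fresh real data to pull it back. That's the intuition behind why feeding model output back in as training data without an anchor to human-written text is risky.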
We've not even had an LLM available for a full year, but we're already having trouble distinguishing.
Making waffles, so I only did a light google, but I don't really think ChatGPT is leveraging GANs for its main algos; it's simply that the GAN concept could easily be applied to LLM text to make delineation even harder.
We're probably going to need a lot more tests and interviews on critical reasoning and logic skills. Which is probably how it should have been all along, but it'll be weird as that happens.
sorry if grammar is fuckt - waffles
So a few tidbits you reminded me of:
You're absolutely right: there's what's called an alignment problem between what the human thinks looks superficially like a quality answer and what would actually be a quality answer.
You're correct that it will always be somewhat of an arms race to detect generated content, as lossy compression and metadata scrubbing can do a lot to make an image unrecognizable to detectors. A few people are trying to create some sort of integrity check for media files, but it would create more privacy issues than it would solve.
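To illustrate why naive integrity checks lose the arms race: a byte-level signature (sketched below with HMAC; the key name and file bytes are made-up placeholders, and real provenance schemes are more elaborate) verifies only the exact original bytes, so any lossy re-encode or metadata strip invalidates it.

```python
import hashlib
import hmac

# Hypothetical secret held by the capture device; a real scheme would use
# public-key signatures so anyone could verify without the secret.
SIGNING_KEY = b"camera-private-key"

def sign(media_bytes: bytes) -> str:
    """Provenance tag computed over the exact file bytes."""
    return hmac.new(SIGNING_KEY, media_bytes, hashlib.sha256).hexdigest()

def verify(media_bytes: bytes, tag: str) -> bool:
    return hmac.compare_digest(sign(media_bytes), tag)

original = b"\x89PNG...raw image bytes..."
tag = sign(original)

# Simulate a lossy re-encode: even a tiny byte-level change breaks the tag.
recompressed = original[:-3] + b"jpg"

print(verify(original, tag))      # True
print(verify(recompressed, tag))  # False
```

That's the core tension: a signature robust enough to survive normal image pipelines has to be fuzzy, and a device-bound signing key is exactly the kind of tracking identifier that raises the privacy issues mentioned above.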
We've had LLMs for quite some time now. I think the most notable release in recent history, aside from ChatGPT, was GPT2 in 2019, as it introduced a lot of people to the concept. It was one of the first language models that was truly "large," although they've gotten much bigger since the release of GPT3 in 2020. RLHF and the focus on fine-tuning for chat and instructability wasn't really a thing until the past year.
Retraining image models on generated imagery does seem to cause problems, but I've noticed fewer issues when people have trained FOSS LLMs on text from OpenAI. In fact, it seems to be a relatively popular way to build training or fine-tuning datasets. Perhaps training a model from scratch could present issues, but generally speaking, training a new model on generated text seems to be less of a problem.
Critical reading and thinking were always a requirement, as I believe you say, but they're certainly needed for interpreting the output of LLMs in a factual context. I don't really see LLMs themselves outperforming humans on reasoning at this stage, but the text they generate will certainly make those human traits more of a necessity.
Most of the text models released by OpenAI are so-called "Generative Pretrained Transformer" models, with the keyword being "transformer." Transformers are a separate model architecture from GANs, but are certainly similar in more than a few ways.
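Since "transformer" is the operative word, here's what its core operation, scaled dot-product attention, looks like stripped down to a pure-Python toy (the matrices are made-up illustrative values, not from any real model): each token's output is a weighted mix of all tokens' values, with weights from query-key similarity.

```python
import math

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)  # one weight per token, sums to 1
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Tiny example: 3 tokens, dimension 2 (arbitrary values for illustration)
Q = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
V = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
out = attention(Q, K, V)
```

The contrast with a GAN is that nothing here is adversarial: there's no discriminator, just stacks of this mixing operation trained to predict the next token.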
These all align with my understanding! Only thing I'd mention is that when I said "we've not had llms available" I meant "LLMs this powerful ready for public usage". My b
Yeah, that's fair. The early versions of GPT3 kinda sucked compared to what we have now; for example, it basically couldn't rhyme. RLHF or some of the more recent advances seemed to turbocharge that aspect of LLMs.