This post was submitted on 27 May 2024
Technology
Genuine question: do you know that's what happened? This type of implementation can suggest things like this without that exact text ever appearing in its training data.
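To illustrate what I mean (a toy sketch, nothing like Google's actual setup): even a trivial bigram model can emit word sequences that never appear verbatim in its training text, just by recombining pieces it has seen. The corpus below is invented for the example.

```python
import random
from collections import defaultdict

# Train a toy bigram model: map each word to the words observed after it.
# The corpus is made up purely for illustration.
corpus = "add glue to paint . add cheese to pizza .".split()

following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

word = "add"
output = [word]
while word in following and len(output) < 6:
    word = random.choice(following[word])
    output.append(word)

# One possible sample is "add glue to pizza ." -- a sentence that never
# appears verbatim in the training text; only its pieces do.
print(" ".join(output))
```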
In this case, it seems pretty likely. We know Google paid Reddit to train on its data, and Google's response used the exact same measurement as this comment suggesting putting Elmer's glue in the pizza (a rough way to check that kind of overlap yourself is sketched after the links):
https://old.reddit.com/r/Pizza/comments/1a19s0/my_cheese_slides_off_the_pizza_too_easily/
And their deal with Reddit: https://www.cbsnews.com/news/google-reddit-60-million-deal-ai-training/
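For anyone who wants to check this kind of verbatim overlap themselves, here's a rough sketch; the two strings below are paraphrased stand-ins, not exact quotes of either text.

```python
def shared_ngrams(a: str, b: str, n: int = 6) -> set[tuple[str, ...]]:
    """Return word n-grams appearing in both texts (case-insensitive)."""
    def ngrams(text: str) -> set[tuple[str, ...]]:
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}
    return ngrams(a) & ngrams(b)

# Placeholder texts, paraphrased for illustration only.
ai_answer = "you can also add about 1/8 cup of non-toxic glue to the sauce"
reddit_comment = "add about 1/8 cup of non-toxic glue to the sauce for extra tackiness"

# Long shared runs (here 6 words) are strong evidence of verbatim reuse
# rather than coincidence.
print(shared_ngrams(ai_answer, reddit_comment))
```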
It's going to be hilarious to see these companies eventually abandon Reddit because it's giving them awful results, and then Reddit is completely fucked.
This doesn't mean that there are Reddit comments suggesting putting glue on pizza or even eating glue. It just means that the implementation of Google's LLM is half-baked and built its model in a weird way.
I literally linked you to the Reddit comment and pointed out that Google's response used the same measurement as the comment.
Are you an LLM?
Oh, hah, sorry! Thanks, I didn't realise that the Reddit link pointed to the glue thing.
Yes