this post was submitted on 04 Sep 2024
185 points (96.0% liked)

ChatGPT

8947 readers
1 users here now

Unofficial ChatGPT community to discuss anything ChatGPT

founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 37 points 2 months ago (1 children)

Are we maybe talking about 57% of newly created content? Because I also have a very hard time believing that LLM-generated content has already surpassed the last few decades of accumulated content on the internet.

[–] [email protected] 15 points 2 months ago* (last edited 2 months ago)

I'm too dumb to fully understand the paper, but it seems quite possible that this is a misinterpretation.

What I've figured out:

  • They're exclusively looking at text.
  • Translations are an important factor. Lots of English content is taken and (badly) machine-translated into other languages to grift ad money.

What I can't quite figure out:

  • Do they only look at translated content?
  • Is their dataset actually representative of the whole web?

The actual quote from the paper is:

Of the 6.38B sentences in our 2.19B translation tuples, 3.63B (57.1%) are in multi-way parallel (3+ languages) tuples

And "multi-way parallel" means translated into multiple languages:

The more languages a sentence has been translated into (“Multi-way Parallelism”)

But yeah, no idea what their "translation tuples" actually contain. They seem to do some deduplication of sentences, too. In general, it very much feels like quoting that 57.1% figure without any of this context is a massive oversimplification.
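To make the statistic concrete: if a "translation tuple" groups copies of the same sentence across languages, the 57.1% would be the share of sentences sitting in tuples that span 3+ languages. Here's a toy sketch of that calculation. The dict-per-tuple representation and the example sentences are purely my invention, not the paper's actual data format:

```python
# Hypothetical representation: each translation tuple maps
# language code -> one copy of the sentence in that language.
tuples = [
    {"en": "Hello", "fr": "Bonjour"},                               # 2 languages
    {"en": "Thanks", "de": "Danke", "es": "Gracias"},               # 3 languages -> multi-way
    {"en": "Bye", "fr": "Au revoir", "it": "Ciao", "pt": "Tchau"},  # 4 languages -> multi-way
]

# Total sentences = every language copy in every tuple.
total_sentences = sum(len(t) for t in tuples)

# Sentences in "multi-way parallel" tuples (3+ languages).
multiway_sentences = sum(len(t) for t in tuples if len(t) >= 3)

print(total_sentences)                                        # 9
print(multiway_sentences)                                     # 7
print(round(multiway_sentences / total_sentences * 100, 1))   # 77.8
```

Note how the percentage is computed over sentences inside the translation dataset, not over the web as a whole — which is exactly why quoting the number without context is misleading.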