this post was submitted on 08 Dec 2024
457 points (94.5% liked)

The GPT Era Is Already Ending (www.theatlantic.com)
submitted 3 days ago* (last edited 3 days ago) by [email protected] to c/technology
 

If this is the way to superintelligence, it remains a bizarre one. “This is back to a million monkeys typing for a million years generating the works of Shakespeare,” Emily Bender told me. But OpenAI’s technology effectively crunches those years down to seconds. A company blog boasts that an o1 model scored better than most humans on a recent coding test that allowed participants to submit 50 possible solutions to each problem—but only when o1 was allowed 10,000 submissions instead. No human could come up with that many possibilities in a reasonable length of time, which is exactly the point. To OpenAI, unlimited time and resources are an advantage that its hardware-grounded models have over biology. Not even two weeks after the launch of the o1 preview, the start-up presented plans to build data centers that would each require the power generated by approximately five large nuclear reactors, enough for almost 3 million homes.

https://archive.is/xUJMG

[–] Buffalox 42 points 3 days ago* (last edited 3 days ago) (28 children)

It's a great article IMO, worth the read.

But:

“This is back to a million monkeys typing for a million years generating the works of Shakespeare,”

This is such a stupid analogy; the chance of said monkeys accidentally matching even a single full page is so slim it's practically zero.
To type even a simple 6-letter word like "stupid", there are 26⁶ possible letter combinations, which is 308,915,776 combinations for that single simple word!
A page has about 2,000 letters, giving roughly 26²⁰⁰⁰ ≈ 9×10²⁸²⁹ combinations, a number with 2,830 digits. And that's disregarding punctuation, capital letters, special characters, and numbers.
A million monkeys times a million years times 365 days times 24 hours times 60 minutes times 60 seconds times 10 random keystrokes per second is only 315,360,000,000,000,000,000 or about 3.15e+20 attempts, assuming none are repeated. That's only 21 digits, leaving it roughly 2,800 orders of magnitude short of covering a single page even once.
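For anyone who wants to check the arithmetic, here's a quick back-of-the-envelope sketch (assuming a plain 26-letter alphabet with no punctuation or capitals):

```python
# Back-of-the-envelope check of the numbers above,
# assuming a 26-letter alphabet.
word_combos = 26 ** 6      # ways to fill a 6-letter word
page_combos = 26 ** 2000   # ways to fill a ~2000-letter page

# 1e6 monkeys * 1e6 years * 365 days * 24 h * 60 min * 60 s * 10 keys/s
keystrokes = 10**6 * 10**6 * 365 * 24 * 60 * 60 * 10

print(word_combos)            # 308915776
print(len(str(page_combos)))  # 2830 (digits in 26**2000)
print(keystrokes)             # 315360000000000000000 (21 digits)
```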

I'm so sick of seeing this analogy, because it misses the point by an insane margin. It's extremely misleading and completely misrepresents what it takes to get something very complex right by chance.

To generate a work of Shakespeare by chance is impossible in the lifespan of this universe. The mathematical likelihood is so staggeringly low that, as far as I know, it's considered impossible by any scientific or mathematical standard.

[–] [email protected] 0 points 2 days ago* (last edited 2 days ago) (1 children)

You are missing a piece of the analogy.

After each key press the sizes of the keys change, so some become more likely to be hit than others.

How the sizes of the keys vary is the secret being sought, and this training requires many, many more monkeys than just producing Shakespeare.
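As a toy illustration of the resizing-keyboard idea (the training text and weights here are made up, and real models learn probabilities over tokens, not single letters):

```python
import random

# Toy version of the resizing-keyboard analogy: every key starts the
# same size, and "training" grows the keys for letters that appear in
# the corpus, so the monkey becomes more likely to press them.
keys = list("abcdefghijklmnopqrstuvwxyz ")
sizes = {k: 1.0 for k in keys}

def train(corpus):
    for ch in corpus:
        if ch in sizes:
            sizes[ch] += 1.0  # this key gets bigger

def press():
    # the monkey hits a key with probability proportional to its size
    return random.choices(keys, weights=[sizes[k] for k in keys])[0]

train("to be or not to be that is the question")
gibberish = "".join(press() for _ in range(30))
```

After training, frequent letters like "t" and "e" dominate the keyboard, so the output drifts toward the statistics of the corpus without ever "knowing" anything.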

[–] [email protected] 3 points 2 days ago

AI data analyst here. The above is an excellent extension of the analogy.

Now, imagine another monkey controlling how the sizes of the keys vary. There might even be another monkey controlling that one.

The analogy doesn't seem to break until we start talking about the assumptions humans make for efficiency.
