this post was submitted on 15 May 2024
638 points (98.9% liked)

TechTakes

1483 readers
234 users here now

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

founded 1 year ago
[–] [email protected] 117 points 7 months ago (152 children)

What I find delightful about this is that I already wasn't impressed! Because, as the paper goes on to say

Moreover, although the UBE is a closed-book exam for humans, GPT-4’s huge training corpus largely distilled in its parameters means that it can effectively take the UBE “open-book”

And here I was thinking it not getting a perfect score on multiple-choice questions was already damning. But apparently it doesn't even get a particularly good score!

[–] [email protected] 23 points 7 months ago (58 children)

That’s like saying a person who reads a book before a quiz is taking it open-book because they remember reading the book.

[–] [email protected] 21 points 7 months ago* (last edited 7 months ago) (14 children)

I'm not even going to engage in this thread cause it's a tar pit, but I do think I have the appropriate analogy.

When taking certain exams in my CS programme, you were allowed to have notes, but with two restrictions:

  1. They had to be handwritten;
  2. they had to fit on a single A4 page.

The idea was that you had to put real work into making it, since the full material was obviously the size of a fucking book and not an A4 page, and you couldn't just print or copy it from somewhere. So you really had to distill the information and build a thought map or an index for yourself.

Compare that to an ML model that is allowed to train on the data for as long as it likes, as long as the result is a fixed-size matrix of parameters that helps it answer questions with high reliability.

It's not the same as an open book, but it's definitely not closed book either. And LLMs have billions of parameters in that matrix: literal gigabytes of data in their "notes". For comparison, the entire text of War and Peace is ~3 MB. An LLM is a library of trained notes.
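A rough back-of-the-envelope calculation makes the scale gap concrete. The parameter count and storage format below are assumptions for illustration (a hypothetical 7-billion-parameter model stored in 16-bit floats), not figures from the paper:

```python
# Back-of-the-envelope: size of an LLM's "notes" vs. a long novel.
# Assumed figures, not from the paper: a 7B-parameter model in fp16.
params = 7_000_000_000        # assumed parameter count
bytes_per_param = 2           # fp16 = 2 bytes per parameter
model_bytes = params * bytes_per_param

war_and_peace_bytes = 3 * 1024 * 1024   # ~3 MB, the figure quoted above

ratio = model_bytes / war_and_peace_bytes
print(f"model weights: ~{model_bytes / 1e9:.0f} GB")
print(f"that's roughly {ratio:,.0f}x the text of War and Peace")
```

Even under these conservative assumptions, the "A4 page of notes" the model gets to carry into the exam is thousands of copies of the novel over.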
