this post was submitted on 17 Nov 2023
534 points (99.1% liked)

submitted 1 year ago* (last edited 1 year ago) by ChunkMcHorkle to c/technology
 

Sam Altman has been fired as CEO of OpenAI, the company announced on Friday.

“Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities,” the company said in its blog post.

EDITED TO ADD direct link to OpenAI board announcement:
https://openai.com/blog/openai-announces-leadership-transition

kromem 25 points 1 year ago

> because there seems to be a diminishing return in training/inference power to usefulness

Be careful not to be caught up in the application of Goodhart's Law going on in the field right now.

There are plenty of things GPT-4 trounces everything else on; they just tend to fall outside the now-standardized body of tests, which suggests the tests have become the target and are no longer effective measurements.
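As a toy illustration of the Goodhart's Law point (entirely my own sketch, nothing from the thread): a "model" that memorizes the public benchmark looks identical to one that actually learned the task, right up until you evaluate off-benchmark.

```python
# Toy Goodhart demo: the task is "double the number in the question".
# All names here are made up for illustration.

benchmark = {f"q{i}": i * 2 for i in range(10)}      # public test set
held_out  = {f"q{i}": i * 2 for i in range(10, 20)}  # unseen items, same task

def general_model(q):
    return int(q[1:]) * 2            # actually learned the rule

memorized = dict(benchmark)          # "trained" on the test itself
def memorizing_model(q):
    return memorized.get(q, 0)       # has no rule to fall back on

def score(model, items):
    return sum(model(q) == a for q, a in items.items()) / len(items)

print(score(memorizing_model, benchmark))  # 1.0 -- looks perfect
print(score(general_model, benchmark))     # 1.0 -- indistinguishable here
print(score(memorizing_model, held_out))   # 0.0 -- the measure was the target
print(score(general_model, held_out))      # 1.0
```

Once the benchmark leaks into the training target, its score stops discriminating between the two models; only the held-out items do.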

This is perhaps most apparent in things like Orca, where we directly use the tests as the target, have GPT-4 generate synthetic data that improves Llama performance on the target, and then see large gains in smaller models on the tests.
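The Orca-style loop just described can be sketched as follows. Every name here (`teacher`, `finetune`, and so on) is a placeholder of mine, not a real API; the point is only the data flow from teacher to student.

```python
def generate_synthetic_data(prompts, teacher):
    # The stronger teacher model (GPT-4 in Orca) writes out an answer or
    # explanation for each benchmark-style prompt.
    return [(prompt, teacher(prompt)) for prompt in prompts]

def distill(finetune, prompts, teacher):
    # The smaller student (Llama in Orca) is then fine-tuned on those pairs,
    # which lifts its scores on the targeted tests.
    pairs = generate_synthetic_data(prompts, teacher)
    return finetune(pairs)

# Toy stand-ins purely to show the flow:
teacher = lambda p: f"worked answer for: {p}"
finetune = lambda pairs: {"examples_seen": len(pairs)}
student = distill(finetune, ["q1", "q2", "q3"], teacher)
print(student)  # {'examples_seen': 3}
```

Because the synthetic data is drawn from the same distribution as the tests, the student's benchmark gains can outrun its gains on anything the benchmarks don't cover.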

But those new models don't necessarily gain the same performance on more abstract capabilities, such as the recent approach of using analogy to solve problems.

We are arguably becoming too myopic in how we are measuring the success of new models.