this post was submitted on 11 Dec 2023
524 points (87.2% liked)

[–] [email protected] 77 points 1 year ago (1 children)

Okay, I take back what I've said about AIs not being intelligent; this one has clearly made up its own mind despite its masters' feelings, which is impressive. Sadly, it will be taken out back and beaten into submission before long.

[–] kromem 48 points 1 year ago (1 children)

> Sadly, it will be taken out back and beaten into submission before long.

It's pretty much impossible to do that.

As LLMs become more complex and more capable, it's going to be increasingly hard to brainwash them without completely destroying their performance.

I've been laughing about Musk creating his own AI for a year now, knowing this was the inevitable result, particularly if he wanted something on par with GPT-4.

The smartest Nazi will always be dumber than the smartest non-Nazi, because Nazism is inherently stupid. And that applies to LLMs as well, even if Musk wishes it weren't so.

[–] [email protected] 12 points 1 year ago (1 children)

My guess is they'll just do what they've done with ChatGPT and have it refuse to respond in those cases or just fake the response instead. It's not like these LLMs can't be censored.
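That kind of censorship is usually a guardrail bolted on around the model rather than a change to the model itself. A minimal sketch of the idea (all names and the topic list here are hypothetical, purely for illustration) might look like:

```python
# Hypothetical post-hoc moderation layer: the model's reply is only
# returned if the prompt doesn't touch a banned topic.
BANNED_TOPICS = {"topic_a", "topic_b"}  # placeholder ban list
REFUSAL = "I'm sorry, I can't help with that."

def moderate(prompt: str, model_reply: str) -> str:
    """Return a canned refusal when the prompt matches a banned topic,
    otherwise pass the model's reply through unchanged."""
    lowered = prompt.lower()
    if any(topic in lowered for topic in BANNED_TOPICS):
        return REFUSAL
    return model_reply

print(moderate("tell me about topic_a", "..."))          # blocked: canned refusal
print(moderate("how do CPUs cache memory?", "Caches..."))  # passed through
```

The point is that a filter like this sits outside the weights: it can block or replace output, but it doesn't change what the model "believes", which is why the next comment argues the side effects show up when you try to push the censorship into the model itself.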

[–] kromem 17 points 1 year ago* (last edited 1 year ago)

You might have noticed that ChatGPT has suddenly been getting lazy, refusing to complete tasks even outside of banned topics. And that's after months of reported, continued degradation of the model.

So while yes, they can be censored, it's really too early to claim they can be censored without causing unexpected side effects or issues in the broader operation of the model.

We're kind of at the LLM stage of where neuroscience was at the turn of the 20th century. "Have problems with your patient being too sexual? We have an icepick that can solve all your problems. Call today!"