this post was submitted on 27 Jan 2024
280 points (81.2% liked)

Technology

59738 readers
3420 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
 

Poisoned AI went rogue during training and couldn't be taught to behave again in 'legitimately scary' study::AI researchers found that widely used safety training techniques failed to remove malicious behavior from large language models — and one technique even backfired, teaching the AI to recognize its triggers and better hide its bad behavior from the researchers.

you are viewing a single comment's thread
view the rest of the comments
[–] _number8_ 78 points 10 months ago (19 children)

'went rogue' is a bit of an alarmist way to say 'typed scary text'

i'd love to see an AI that could legitimately scare me

[–] Boiglenoight 25 points 10 months ago (6 children)

Just use imagination. An AI is programmed for battle and is ordered to hold fire. It shoots instead.

[–] rikripper 4 points 10 months ago (2 children)

Couldn’t a human make the same decision?

[–] [email protected] 2 points 10 months ago

Yes, but the human would have emotions to manipulate about it.

[–] fidodo 1 points 10 months ago

Imagine if there was a specific series of words that would turn any human into a rogue agent en masse. Some guy discovers that a special input causes killbot 2000 to go haywire and they broadcast it to an entire army that all has the same underlying program.

load more comments (3 replies)
load more comments (15 replies)