this post was submitted on 13 Jan 2024
922 points (98.7% liked)


The Pentagon has its eye on the leading AI company, which this week softened its ban on military use.

[–] Fedizen 113 points 10 months ago (3 children)

I can't wait until we find out AI trained on military secrets is leaking military secrets.

[–] [email protected] 24 points 10 months ago (1 children)

I can't wait until people find out that you don't even need to train it on secrets for it to "leak" secrets.

[–] Kase 6 points 10 months ago (1 children)
[–] [email protected] 7 points 10 months ago (1 children)

Large language models are all about identifying patterns in how humans use words and copying them. Thing is, that's also how people tend to do things a lot of the time. If you give the LLM enough tertiary data, it may be capable of 'accidentally' (read: randomly) outputting things you don't want people to see.
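
A rough toy sketch of that idea (not from this thread): a hypothetical word-level Markov chain stands in for a real LLM, "trained" on text that happens to contain a made-up secret. Plain pattern-copying is enough to spit the sensitive line back out verbatim:

```python
from collections import defaultdict, Counter

# Toy "training data" that happens to contain one sensitive (made-up) line.
corpus = (
    "the weather report is public information . "
    "the launch code for silo seven is 0000 . "
    "the cafeteria menu is public information ."
).split()

# Learn which word tends to follow each two-word context.
model = defaultdict(Counter)
for a, b, c in zip(corpus, corpus[1:], corpus[2:]):
    model[(a, b)][c] += 1

def complete(w1, w2, length=8):
    """Greedily extend a prompt by always copying the most common continuation."""
    out = [w1, w2]
    for _ in range(length):
        followers = model.get((out[-2], out[-1]))
        if not followers:
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

# A harmless-looking two-word prompt walks straight into the memorized secret,
# because that phrasing only ever appeared next to the sensitive text.
print(complete("the", "launch"))
# -> "the launch code for silo seven is 0000 . the"
```

Real models are vastly larger and the memorization is fuzzier, but the failure mode is analogous: strings that only ever appear in one context are the easiest to reproduce on demand.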

[–] uranibaba 1 points 10 months ago (1 children)

But how would you know when you have this data?

[–] [email protected] 1 points 10 months ago

It may prompt people to recognize things they had glossed over before.

[–] AeonFelis 18 points 10 months ago

In order for this to happen, someone will have to utilize that AI to make a cheatbot for War Thunder.

[–] [email protected] 14 points 10 months ago (1 children)

I mean, even ChatGPT Enterprise prevents that.

It's only the consumer versions that train on your data and submissions.

Otherwise no legal team in the world would consider ChatGPT or Copilot.

[–] [email protected] 4 points 10 months ago

I will say that they still store and use your data in some way. They just haven't been caught yet.

Anything you have to send over the internet to a server you do not control will probably not work for an infosec-minded legal team.