this post was submitted on 30 Jan 2024
505 points (93.5% liked)

[–] [email protected] 108 points 10 months ago (4 children)

So what actually happened seems to be this:

  • A user was exposed to another user's conversation.

That's a big oof and really shouldn't happen.

  • The conversations that were exposed contained sensitive user information.

Irresponsible user error; everyone and their mom should know better by now.

[–] foggy 22 points 10 months ago (1 children)

Yeah, you gotta treat ChatGPT like it's a public GitHub repository.

[–] pirat 1 points 10 months ago
[–] [email protected] 6 points 10 months ago (5 children)

Why is it that whenever a corporation loses or otherwise leaks sensitive user data that was their responsibility to keep private, all of Lemmy comes out to comment about how it's the users who are idiots?

Except it's never just about that. Every comment has to make it known that they would never allow that to happen to them because they're super smart. It's honestly one of the most self-righteous, tone-deaf takes I see on here.

[–] [email protected] 10 points 10 months ago

I don't support calling people idiots, but here's the thing: we can't control whether corporations leak our data, but we can control whether we share our passwords with ChatGPT.

[–] [email protected] 7 points 10 months ago

Because that's what the last several reported "breaches" have been. A lot of accounts were compromised through an unrelated breach because the users had reused the same passwords across multiple accounts.

In this case, ChatGPT clearly tells you not to give it any sensitive information, so giving it sensitive information is on the user.

[–] [email protected] 5 points 10 months ago

Data loss or leaks may not be the end user's fault, but they are the end user's responsibility. Yes, OpenAI should have had shit in place for this never to have happened. Unfortunately, you, I, and the users whose passwords were leaked have no way of knowing what kinds of safeguards they have in place for our data.

The only point of access to my information that I can control completely is what I do with it. If someone says "hey, don't do that with your password," they're saying it's a potential safety issue. You're putting control of your account in the hands of some entity you don't know. If it's revealed, well, it's THEIR fault, but you also goofed and should take responsibility for it.

[–] stoly 4 points 10 months ago (1 children)

Because people who come to Lemmy tend to be more technical and better informed about security than the average person. For most people around here, much of this is obvious, and we're all tired of hearing this story over and over while the public learns nothing.

[–] [email protected] 0 points 10 months ago* (last edited 10 months ago) (1 children)

Your frustration is valid. Also, calling people stupid is an easy mistake that a lot of people make.

[–] stoly 3 points 10 months ago (1 children)

Well, I'd never use the term to describe a person; it's unnecessarily loaded. Ignorant, naive, etc. might be better.

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago)

Good to hear. I don't know what I meant to say, but it looks like I accidentally (and reductively) summarized your point while being argumentative. 🫤 Oops.

[–] [email protected] 2 points 10 months ago* (last edited 10 months ago)

To be fair, I think many AI users, including myself, have at times overshared beyond what is advised. I never claimed to be flawless, but that doesn't absolve me of responsibility.

I do the same oversharing here on Lemmy. But what I don't do is share real login information, my real name, SSN, or address.

OpenAI is absolutely still to blame for leaking users' conversations, but even if the data hadn't been leaked, it would be used for training and should never have been put in a prompt.
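
To make the "never put it in a prompt" point concrete, here is a minimal sketch of client-side scrubbing before text goes anywhere near a chatbot. The `scrub()` helper and its patterns are hypothetical examples, not anything OpenAI provides, and a few regexes only catch the obvious formats; it's a habit, not a guarantee.

```python
import re

# Hypothetical patterns for obvious secrets; regexes only catch the easy cases.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api key": re.compile(r"\b(?:sk|key|token)[-_][A-Za-z0-9_-]{16,}"),
}

def scrub(text: str) -> str:
    """Replace anything that looks like a secret with a placeholder before prompting."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

if __name__ == "__main__":
    draft = "My login is alice@example.com and the key is sk_live_ABCDEF1234567890"
    print(scrub(draft))
    # -> My login is [REDACTED EMAIL] and the key is [REDACTED API KEY]
```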

[–] Rand0mA 0 points 10 months ago* (last edited 10 months ago)

Maybe it has something to do with being retrained/finetuned on the conversations it's having.