this post was submitted on 26 Jul 2023
162 points (98.2% liked)
A Boring Dystopia
Lemme get this straight: DARPA researchers fabricated a series of words that signaled emotional states. Then they, the DARPA researchers, labeled those series of words with the emotional states for the AI to train on (zero-shot classification). And then they hope to leverage the trained AI to identify "social emotions"?
Everything about this is fucking stupid.
The GPT-3 prompt could've been: "What are some sentences a shameful socialist/conservative/anarchist/terrorist/etc protestor/litterer/murderer/liar/etc might use?", implicitly connecting shame to a particular ideology. As such, the "social emotion" signals encode more than emotion, by way of their method of generation and classification.
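A minimal sketch of the problem being described: when the label ("shame") and an ideology are fused into the generation prompt, every generated sentence inherits both. The prompt template, function names, and labels below are hypothetical, not the actual research code.

```python
# Hypothetical sketch: generating a "shame" dataset from prompts that
# also name an ideology. None of this is DARPA's actual pipeline; the
# prompt template and labels are made up for illustration.

IDEOLOGIES = ["socialist", "conservative", "anarchist"]

def build_prompt(ideology: str) -> str:
    # "shameful" and the ideology are fused in one prompt, so every
    # generated sentence is implicitly labeled with both at once.
    return f"What are some sentences a shameful {ideology} protestor might use?"

def fake_gpt3(prompt: str) -> list[str]:
    # Stand-in for a real GPT-3 call; a real model would return text
    # conditioned on (and therefore contaminated by) the prompt.
    return [f"[sentence generated from: {prompt!r}]"]

# The resulting "shame" dataset carries the ideology along for free.
dataset = []
for ideology in IDEOLOGIES:
    for sentence in fake_gpt3(build_prompt(ideology)):
        dataset.append({"text": sentence, "emotion": "shame", "ideology": ideology})

for row in dataset:
    print(row["emotion"], row["ideology"])
```

Any classifier trained on text generated this way can end up keying on the ideology markers just as much as on the shame, which is the leakage the comment is pointing at.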
Suddenly, some random person is being targeted for having fucked up and they're like, "Wtf did I do? Yes, I did shoplift from Target, but it was like a $20 shirt because my job at Wal-Mart makes me use food stamps to make ends meet. Fuck off!"
The AI automatically detects another violation of social norms.
And you're like, "That's an edge case...". Yeah, sure, but it's DARPA we're talking about here. That should be enough said.
It sounds like they're focusing on the associated shame, which leads to the irony that they'll find the awkward and uncomfortable ones but not the ardent. That sounds unwise.
it's long been the case that to get away with a crime, you make it your business.
The worst part is, ChatGPT cannot generate anything new. It's pre-trained, which is the P in the name.
It can only recombine the training data into forms that sort of match the training data. So, if the training data is garbage, the output will be more garbage.
And this garbage in garbage out is going to be used to harm real people.
In addition, ChatGPT lies. It hallucinates shit that is provably false, because that's what its generated text needs to look like to match the training data.
So it will likely lead to a bunch of false positives, because the positive response better matches the training data.