this post was submitted on 17 Sep 2024
125 points (95.0% liked)


From the article:

This chatbot experiment reveals that, contrary to popular belief, many conspiracy thinkers aren't 'too far gone' to reconsider their convictions and change their minds.

top 26 comments
[–] [email protected] 1 points 6 days ago

Is it a theory when we have proof? I mean, it's only sort of obvious to say that psychiatry is no different from MKULTRA. It might be redundant to say that one IS the other, but what's the fucking difference?

Oh yeah. Psychiatry is private. MKULTRA is a weapon. Not that much of a difference either. They're both targeting wallets.

[–] [email protected] 79 points 1 week ago (1 children)

Another way of looking at it: "AI successfully used to manipulate people's opinions on certain topics." If it can persuade them to stop believing conspiracy theories, AI can also be used to make people believe conspiracy theories.

[–] davidgro 49 points 1 week ago (2 children)

Anything can be used to make people believe them. That's not new or a challenge.

I'm genuinely surprised that removing such beliefs is feasible at all though.

[–] Angry_Autist 2 points 5 days ago
  1. The person needs to have a connection to the conspiracy theorist that is stronger than the identity valence gained by adopting these conspiracies.

  2. The person needs to speak emotionally and sincerely, using direct experience (cookie-cutter approaches rarely work here).

  3. The person needs to genuinely desire the improvement of the other's life.

That is the only way I have ever witnessed it personally work, and it still took weeks.

[–] SpaceNoodle 5 points 1 week ago (2 children)

If they're gullible enough to be suckered into it, they can similarly be suckered out of it - but clearly the effect would not be permanent.

[–] Zexks 2 points 1 week ago (2 children)

That doesn't follow from the "if you didn't reason your way into a belief, you can't reason your way out" line. Considering religious fervor, I'm more inclined to believe that line than yours.

[–] [email protected] 5 points 1 week ago

No one said that AI used "reason" to talk people out of a conspiracy theory. In fact, I would assume it's incredibly unlikely, since AI in general is not reasonable.

[–] SpaceNoodle 2 points 1 week ago

Why? It works as a corollary - there's no logic involved in any of the stages described.

[–] [email protected] 1 points 1 week ago (1 children)

I've always believed the adage that you can't logic someone out of a position they didn't logic themselves into. It protects my peace.

[–] Angry_Autist 1 points 5 days ago

Logic isn't the only way to persuade; in fact, all the evidence seems to show it works on very few people.

Everyone discounts sincere emotional arguments, but frankly that's all I've ever seen work on conspiracyheads.

[–] [email protected] 29 points 1 week ago

The researchers think a deep understanding of a given theory is vital to tackling errant beliefs. "Canned" debunking attempts, they argue, are too broad to address "the specific evidence accepted by the believer," which means they often fail. Because large language models like GPT-4 Turbo can quickly reference web-based material related to a particular belief or piece of "evidence," they mimic an expert in that specific belief; in short, they become a more effective conversation partner and debunker than can be found at your Thanksgiving dinner table or heated Discord chat with friends.

This is great news. The emotional labor needed to talk these people down is mentally and emotionally damaging. Offloading it to software is a genuinely valuable use of the technology.

[–] Gradually_Adjusting 22 points 1 week ago* (last edited 1 week ago) (2 children)

Let me guess, the good news is that conspiracism can be cured but the bad news is that LLMs are able to shape human beliefs. I'll go read now and edit if I was pleasantly incorrect.

Edit: They didn't test the model's ability to inculcate new conspiracies, obviously that'd be a fun day at the office for the ethics review board. But I bet with a malign LLM it's very possible.

[–] Angry_Autist 3 points 5 days ago (1 children)

"LLMs are able to shape human beliefs."

FUCKING THANK YOU!

I have been trying to get people to understand that the danger of AI isn't some DeviantArt PNGtuber not getting royalties for their Darkererer Sanic OC, but the fact that AI can appear like any other person on the internet, engage from multiple accounts, draw on nearly a person's entire web history, and craft 20 believable scenarios catered to every weakness in that person's psychology.

I'm glad your post is getting at least some traffic, but even then it's not gonna be enough.

The people that understand the danger have no power to stop it, the people with the power to stop it are profiting off of it and won't stop unless pressured.

And we can't pressure them if we are arguing art rights and shitposting constantly.

[–] Gradually_Adjusting 3 points 5 days ago (1 children)

We need to make it simpler and connect the dots. Like, what's the worst that could happen when billionaires have exclusive control over a for-profit infinite gaslighting machine? This needs to be spelled out.

[–] Angry_Autist 1 points 5 days ago

I'm writing a short horror story that will at least illustrate what I see as the problem. That's a form that can be easier to digest.

[–] davidgro 19 points 1 week ago (1 children)

A piece of paper dropped on the ground can 'shape human beliefs'. That's literally a tool used in warfare.

The news here is that conspiratorial thinking can be relieved at all.

[–] Xeroxchasechase 1 points 1 week ago (1 children)

"AI is just a tool; is a bit naïve. The power of this tool and the scope makes this tool a devastating potential. It's a good idea to be concerned and talk about it.

[–] davidgro 7 points 1 week ago (1 children)

Agreed - but acting surprised that it can change opinions (for the worse) doesn't make sense to me; that's obvious, since anything can. That AI can potentially do so even more effectively than other things is indeed worth talking about as a society (and is, again, pretty obvious).

[–] Gradually_Adjusting 3 points 1 week ago

I wasn't trying to downplay. If it can be wielded thoughtfully at scale, it could be life changing for literally millions.

The risk is that billionaires own these models, and far too often we see their interests aligned with fascism. If they choose to place a motive in this box, they now know it will have a quantifiable effect.

[–] [email protected] 17 points 1 week ago

More like LLMs are just another type of propaganda. The only thing that can effectively retool conspiracy thinkers is a better education with a focus on developing critical thinking skills.

[–] Sanctus 13 points 1 week ago

All of this can be mitigated much more by ensuring each citizen has a decent education by modern standards. Turns out most of our problems can be fixed by helping each other.

[–] jaggedrobotpubes 11 points 1 week ago

At first glance, the major takeaway here might be that AI can do the Gish gallop, but with the truth instead of lies.

And it doesn't get exhausted with somebody's bad faith bullshit.

[–] [email protected] 6 points 1 week ago (1 children)

"Great! Billy doesn't believe 9/11 was an inside job, but now the AI made him believe Bush was actually president in 1942 and that Obama was never president."

In all seriousness, I think an "unbiased" AI might be one of the few ways to reach people about this stuff, because any Joe Schmoe who tries to confront a conspiracy is just viewed as "believing what they want you to believe!"

[–] [email protected] 5 points 1 week ago (1 children)

With the inherent biases present in any LLM's training, the issue of hallucinations that you've brought up, and the cost of running an LLM at scale being prohibitive to anyone besides private-state partnerships, do you think this will allay conspiracists' valid concerns about the centralization of information access, a la the decline in quality of Google search results over the past decade and a half?

[–] [email protected] 3 points 1 week ago

I think those people might not, but I was once a "conspiracy nut," had a circle of friends who were as well, and know that for a lot of those kinds of people YouTube is the majority of the "research" they do. For those people I think this could work as long as it's not hallucinating and can point to proper sources.

[–] paddirn 3 points 1 week ago

That's just what the machines want you to believe.