
with early "grieftech" entrepreneur Helena Blavatsky

[–] barsquid 14 points 5 months ago (1 children)

Neat little vignette about a vile sociopath scamming people in mourning. And, of course, zero consideration for the grieving people being scammed. "ChatGPT please help me feel like he's still here," "actually he is already in hell."

[–] [email protected] 13 points 5 months ago (1 children)

"grieftech". Fucking "grief tech".

[–] [email protected] 13 points 5 months ago (1 children)

Addressing the “in hell” response that made headlines at Sundance, Rohrer said the statement came after 85 back-and-forth exchanges in which Angel and the AI discussed long hours working in the “treatment center,” working with “mostly addicts.”

We know 85 is the upper bound, but I wonder what Rohrer would consider the minimum number of "exchanges" acceptable for telling someone their loved one is in hell? Like, is 20 in "Hey, not cool" territory, but it's all good once you get to 50? 40?

Rohrer says that when Angel asked if Cameroun was working or haunting the treatment center in heaven, the AI responded, “Nope, in hell.”

“They had already fully established that he wasn't in heaven,” Rohrer said.

Always a good sign when your best defense of the horrible thing your chatbot says is that it's in context.

[–] [email protected] 16 points 5 months ago

it’s very telling that 85 messages is considered a lot. your grief better resolve quick before the model loses coherency and starts digging quotes out of a plagiarized horror movie script

fuck it’s gross how one of the common use cases for LLMs is targeting vulnerable people with the hope they’ll develop a parasocial relationship with your service, so you can keep charging them forever

[–] [email protected] 13 points 5 months ago* (last edited 5 months ago) (1 children)

Jason Rohrer saw a black guy once and it provoked him to write a video game about the castle doctrine

our readers bring us cursèd knowledge like a dead plague rat dropped at our feet

[–] [email protected] 8 points 5 months ago* (last edited 5 months ago) (1 children)

oh fuck he's that asshole? the one that was so petty about a negative Polygon review for said game that he stalked the reviewer's Twitter page until he could find a quote to mangle into a recommendation to put on the game's Steam page? including the reviewer's full name, against the reviewer's repeated, explicit wishes for it to be removed?

[–] [email protected] 6 points 5 months ago* (last edited 5 months ago)

I suddenly feel a lot less bad about him having his game copied and re-sold after he released it into the public domain. Maybe the 'left' in copyleft scared him

Hateful and stupid 🤝

[–] [email protected] 9 points 5 months ago* (last edited 5 months ago)

All my homies hate Helena Blavatsky. Her grifty bullshit has caused so much human misery.

This Rohrer character is a ~~worthy~~ successor.

[–] [email protected] 7 points 5 months ago

Obviously they should have trained the AI on text from John Edward's Crossing Over. /s

But seriously, I worry when one of these hacks figures out how to make the word multipliers approximate a cold reading.

[–] iAvicenna 6 points 5 months ago

oh for fucks sake, will these people never cease to exist? what a cancerous bunch.

[–] [email protected] 6 points 5 months ago (1 children)

TBF what's the difference between an "AI" séance and a "real" one? It's not like one is less of a grift than the other.

[–] [email protected] 3 points 5 months ago (1 children)

That's the problem.

Maybe I'm just reading the room wrong, but the consensus of the comment section seems to be "haha, he thinks AI can replace psychics." But all psychics do is take what people said, reframe it, and maybe tack on some vaguely plausible-sounding nonsense. For the most part, that's exactly what LLMs do. LLMs could replace the psychic industry because, unlike in other professions, there's zero obligation to be correct about anything.

Is it unethical? Absolutely. But psychics are already frauds so it's not like a legitimate profession is being replaced.

Of course, I could just be misreading the room.

[–] [email protected] 3 points 5 months ago

it's like how crypto is new skins on old frauds