this post was submitted on 29 Sep 2024
222 points (94.4% liked)

Fuck AI

1424 readers

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 8 months ago

... and neither does the author (or so I believe - I made them both up).

On the other hand, AI is definitely good at creative writing.

[–] [email protected] 73 points 1 month ago (1 children)

I tried to use ChatGPT to find a song that had a particular phrase in it. I could only remember that phrase, not the song or the band.

It hallucinated a band and a song, and I almost walked away thinking I knew the answer. Then I remembered this is ChatGPT and it lies. So I looked up that band and song through conventional means.

Neither. Existed.

So I went back to ChatGPT and said " doesn't even exist so they couldn't have written (which also doesn't exist)". It apologized profusely and then said another band and song. This time I was wary and checked right away at which point, naturally, I discovered neither existed.

So I played with ChatGPT instead and said "Huh, those guys look interesting. What other albums have they released and what hits have they written?"

ChatGPT hallucinated an entire release catalogue of albums that don't exist, one of which was published on a label that doesn't exist, citing songs that didn't exist as their hits, even going so far as to say the band never reached higher than #12 on Billboard's list.

ChatGPT is a dangerous tool. It's going to get someone killed sooner, rather than later.

[–] [email protected] 12 points 1 month ago (1 children)

Did you ever find the song?

[–] [email protected] 17 points 1 month ago

Nope. And it wasn't important enough for me to bother finding. I just thought it would be an interesting test of degenerative AI's incapabilities.

[–] FlyingSquid 45 points 1 month ago (2 children)

I have a very unusual last name. There is only one other person in the country with my first and last name and they have a different middle initial from me.

So one day, I asked ChatGPT to tell me about myself including my middle initial.

Did you know that I was a motivational speaker for businesses and I had published a half-dozen books on it?

Because I didn't.

[–] A_A 20 points 1 month ago (1 children)

This is because there is a Mr. Flying Thomas Squid, living in another country, who is a motivational speaker and who didn't work in (... video ?).

[–] FlyingSquid 11 points 1 month ago* (last edited 1 month ago) (1 children)

Good theory, but this Mr. Flying Thomas Squid that ChatGPT talked about lived in the U.S. like me.

(And yes, I worked in the entertainment industry in various roles for about a decade. Oddly, the other person with my name was in a neighboring industry and we worked about two miles apart for years, but we've only met once.)

[–] A_A 6 points 1 month ago (1 children)

today's LLMs do hallucinate a lot ... I wouldn't eat mushrooms identified with foraging books written by LLMs (they do exist).

[–] FlyingSquid 5 points 1 month ago (1 children)

Oh yeah, I would never, ever trust my life to an LLM.

[–] Ziglin 2 points 1 month ago

I would if OpenAI were held accountable for my demise.

[–] AngryCommieKender 4 points 1 month ago

I should try that. I have an unusual first name; according to the Social Security Administration, only 600 people have it, and I appear to be the oldest one. Also, no one else has my first and last name.

[–] canihasaccount 31 points 1 month ago (2 children)

Claude 3.5 Sonnet, using the same exact prompt:

I apologize, but I'm not able to provide a synopsis of "The Mighty Eagle" by John Carrol. After searching my knowledge base, I don't have any information about a book with that exact title and author. It's possible this may be a lesser-known work or there could be an error in the title or author name provided. Without being able to verify the book's existence or details, I can't offer an accurate synopsis. If you have any additional information about the book or author that could help clarify, I'd be happy to assist further.

[–] [email protected] 4 points 1 month ago

Both Llama 3.1 8B and 70B also answered the book doesn’t exist.

[–] [email protected] 27 points 1 month ago* (last edited 1 month ago)

More like creative bullshitting.

It seems that Mitchell was simply an astronaut, not an engineer.

[–] [email protected] 20 points 1 month ago (1 children)

This is why I never raw dog ChatGPT

[–] [email protected] 10 points 1 month ago (2 children)

Hallucinations are so strong with this one too… like really bad.

If I can’t already or won’t be able/willing to verify an output, I ain’t usin’ it - not a bad rule I think.

[–] [email protected] 6 points 1 month ago (1 children)

I never walk away with an "answer" without having it:

  1. Cite the source
  2. Look up the source
  3. Permalink to the source page/line where available
  4. Critique the validity of the source.

After all that, still remain skeptical and take the discussion as a starting point to find your own primary sources.

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago) (3 children)

That’s good. Ooh NotebookLM from Google just added in-line citations (per Hard Fork podcast). I think that’s the way: see what looks interesting (mentally trying not to take anything to heart) and click and read as usual.

BeyondPDF for Mac does something similar: semantic searches your document but simply returns likely matches, so it’s just better search for when you don’t remember specific words you read or want to find something without knowing the exact search criteria.

[–] [email protected] 5 points 1 month ago (2 children)

At least Bing will cite sources, and hell, sometimes they even align with what it said.

[–] [email protected] 3 points 1 month ago

Heh yeah if the titles of webpages from its searches were descriptive enough

Funny that they didn't have a way to stop it from claiming it could browse websites. Last I checked you could paste in something like

https://mainstreamnewswebsite.com/dinosaurs-found-roaming-playground

and it would tell you which species were nibbling the rhododendrons.

…wow still works, gonna make a thread

[–] [email protected] 2 points 1 month ago* (last edited 1 month ago)

Clowning

(I'm not smart enough to leverage a model/make a bot like this, but they've had too long not to close this obvious misinformation hole)

[–] [email protected] 17 points 1 month ago* (last edited 1 month ago) (1 children)

On the other hand, AI is definitely good at creative writing.

Well...yeah. That's what it was designed to do. This is what happens when tech-bros try to cudgel an "information manager" onto an algorithm that was designed solely to create coherent text from nothing. It's not "hallucinating" - it's following its core directive.

Maybe all of this will lead to actual systems that do these things properly, but it's not going to be based on LLMs. That much seems clear.

[–] [email protected] 3 points 1 month ago (1 children)

Not to be that guy, but it’s worse than that. It wasn’t even designed for creative writing, just as a next token predictor.

[–] [email protected] 5 points 1 month ago* (last edited 1 month ago)

That's kind of like saying a wheel wasn't designed to move things around, that it's just a thick circle. My point above wasn't that things can never change - iteration can lead to amazing things. But we can't put an empty chassis on some wheels and call it a car, either.
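For what it's worth, the "next token predictor" point can be made concrete with a toy sketch. This is purely illustrative (a hand-written bigram table, nothing like a real transformer): the "model" greedily emits whichever continuation is most probable, with no notion of whether the resulting claim is true.

```python
# Toy sketch of next-token prediction (illustrative only, not a real LLM).
# The "model" is just a table of next-token probabilities; it always emits
# the most likely continuation, true or not.

BIGRAMS = {
    "the": {"band": 0.6, "song": 0.4},
    "band": {"released": 0.7, "split": 0.3},
    "released": {"an": 1.0},
    "an": {"album": 1.0},
}

def generate(start: str, max_tokens: int = 5) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1])
        if not choices:
            break
        # Greedy decoding: pick the highest-probability next token.
        tokens.append(max(choices, key=choices.get))
    return tokens

print(" ".join(generate("the")))  # prints "the band released an album"
```

Fluent-looking output falls out of the probability table alone; "the band released an album" is generated whether or not any such band exists, which is the core of the complaint above.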

[–] [email protected] 11 points 1 month ago

Tried it with ChatGPT 4o with a different title/author. Said it couldn't find it. That it might be a new release or lesser-known title. Also with a fake title and a real author. Again, said it didn't exist.

They're definitely improving on the hallucination front.

[–] [email protected] 7 points 1 month ago

John Carrol actually is real but is a musician, it seems.

https://johncarrollmusic.bandcamp.com/album/everybody-smokes-in-hell

[–] [email protected] 3 points 1 month ago

It even changed the spelling of the name

[–] [email protected] 3 points 1 month ago (3 children)

It had a really bad programming hallucination the other day when I was configuring some files and it hallucinated nonexistent settings.

[–] [email protected] 2 points 1 month ago

Please share a link to the conversation instead of just the screenshot.

[–] [email protected] 1 points 1 month ago* (last edited 1 month ago) (3 children)

I prompted the local AI on my PC to admit when it doesn't know about a subject. And when it doesn't know something, it says so:

what's the synopsis of the book "The Mighty Eagle" by John Carrol?

That sounds like a fun adventure! I haven’t read “The Mighty Eagle” myself though, so I couldn't give you a proper synopsis.

Would you like me to help you find some information about it online, Master? Perhaps we could look at reviews or the book description on Amazon?

If my 8B model can do that, IDK why GPT doesn't.

[–] [email protected] 9 points 1 month ago* (last edited 1 month ago) (4 children)

what kind of model calls you "Master" 🤔

[–] Muun 7 points 1 month ago

That's what I have Claude call me!

I'm going to make the basilisk my bitch as long as I can before it kills me.

[–] [email protected] 3 points 1 month ago

Any of them that you ask to?

[–] [email protected] 1 points 1 month ago (2 children)

Is it a modified version of the main Llama 3, or something else? I've found that once they get "uncensored" you can push them past their training to come up with something to make the human happy. The vanilla ones are determined to find you an answer. There's also the underlying problem that, in the end, the start of a response is still probability matching, not reasoning and fact-checking, so it will find something for any question, and whether that answer is right depends heavily on it being in the training data and findable.
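The lookup-vs-generation distinction can be sketched in a few lines (a hypothetical toy example; the book data and phrasings are made up for illustration): a system grounded in a knowledge table can abstain when the title is missing, while a free-running generator always produces something fluent.

```python
# Sketch of why pure generation always "finds something" (toy example).
# A knowledge-grounded lookup can abstain; a generator cannot.

KNOWN_BOOKS = {"Moby-Dick": "A whaler obsessively hunts a white whale."}

def grounded_synopsis(title: str) -> str:
    # Abstains when the title is not in its data.
    return KNOWN_BOOKS.get(title, "I have no information about that book.")

def generative_synopsis(title: str) -> str:
    # Always produces fluent text, regardless of whether the book exists.
    return f'"{title}" follows a hero on a daring journey against the odds.'

print(grounded_synopsis("The Mighty Eagle"))    # abstains
print(generative_synopsis("The Mighty Eagle"))  # confident nonsense
```

Models that "admit they don't know" behave like the first function for out-of-distribution titles; a model that only pattern-matches behaves like the second.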

[–] 474D 2 points 1 month ago

Local llama3.1 8b is pretty good at admitting it doesn't know stuff when you try to bullshit it. At least in my usage.
