this post was submitted on 03 Nov 2024
1274 points (99.4% liked)
Fuck AI
1449 readers
"We did it, Patrick! We made a technological breakthrough!"
A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.
founded 8 months ago
you are viewing a single comment's thread
view the rest of the comments
Respectfully, this is victim blaming. Criticize Google, not end users.
Wait, are you advocating that people blindly trust unreliable sources and then get angry at the unreliable source when it turns out to be unreliable, rather than learn from shit like this so they can avoid becoming a victim?
Google has spent a fortune to convince people they are a reliable source. This is clearly on Google, not on people who aren't tech savvy.
Ok, I agree that Google isn't a good guy in this situation, but that doesn't mean the advice not to just trust what Google says is invalid. It also doesn't absolve Google of their accidental or deliberate inaccuracies.
It was just an "In case you didn't know, don't just trust Google even though they've worked so hard at building a reputation for being trustworthy and even seemed pretty trustworthy in the past. Get a phone number from the company's website."
And then I'll add on: regardless of where you got the phone number, be skeptical if someone asks for your banking information or other personal information that isn't usually involved in such a service. Not because you'll be the bad guy if you do get scammed, but to spare yourself the experience: it's at least going to be a pain in the ass to deal with, if not a financially horrible situation if you're unable to get it reversed.
Where did I say this? I didn't say this. You said I said this.
I don't see anyone being blamed in the original comment you replied to, just general advice to avoid falling for a scam like this. There isn't even a victim in this case, because the request for banking info tipped them off, if I'm understanding the OP correctly.
So I'm confused about what, specifically, you're objecting to in the original comment. Is it the general idea that you shouldn't blindly trust results given by Google's LLM, which isn't known for its reliability?
For me it's the idea of focusing at all on telling people not to trust LLMs, as opposed to criticizing companies for putting them prominently at the top of the page.
Why not both? Plus, not just trusting LLMs is something any of us can decide to do on our own.
Because the average person doesn't even know what an LLM is or what it even stands for, and putting a misinformation generator at the top of search pages is irresponsible.
Like, if something is so unreliable that you have to tell people "don't trust what this thing says," but you still put it at the top of the page? Come on... It's like putting a self-destruct button in a car and telling people "well, the label says not to push it!"
We don't control what Google puts on their search page. Ideally, yeah, they wouldn't be pushing their LLM out to where it's the first thing to respond to people who don't understand that it isn't reliable. But we live in a reality where they did put it on top of their search page and where they likely don't even care what we think of that. Their interests and everyone else's don't necessarily align.
That comment was advice for people who read it and haven't yet realized how unreliable it is; it has nothing to do with the average person. I'm still confused as to why you have such an issue with it being said at all. Based on what you've been saying, I think you'd agree that Google is being either negligent or malicious by putting it there. So saying they shouldn't be trusted seems like common sense, but your first comment acts like it's just being mean to anyone who has trusted it or something.
Remember when 4chan got people to microwave their phones by convincing them it would charge the battery?
If calling those people stupid is victim blaming then so be it. I’m blaming the victim.
This case isn’t as clear-cut as that, but even before the AI mania, the instant answer at the top of Google results was frequently incorrect. Being able to discern BS from real results has always been necessary, and AI doesn’t change that.
I’ve been using Kagi this year and it keeps LLM results out of the way unless I want them. When you open their AI assistant, it says:
I think that sums it up nicely.