this post was submitted on 31 Dec 2024
1800 points (98.0% liked)

Fuck AI

1599 readers
138 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 9 months ago
MODERATORS
1800
You think? (lemmy.world)
submitted 4 days ago by nifty to c/fuck_ai
 
top 50 comments
sorted by: hot top controversial new old
[–] [email protected] 11 points 2 days ago

Current AI is just going to be used to further disenfranchise citizens from reality. It's going to be used to spread propaganda and create noise so that you can't tell what is true and what is not anymore.

We already see people like Elon using it in this way.

[–] UnderpantsWeevil 9 points 2 days ago

McDonalds removes AI drive-throughs ~~after order errors~~ because they aren't generating increased profits

Schools, doctor's offices, and customer support services will continue to use them because reducing quality of service appears to have no impact on the influx in private profit-margin on revenue.

[–] Allonzee 46 points 3 days ago* (last edited 3 days ago) (1 children)

They just want to make an economy they don't have to pay anyone to profit from. That's why slavery became Jim Crow became migrant labor and with modernity came work visa servitude to exploit high skilled laborers.

The owners will make sure they always have concierge service with human beings as part of upgraded service, like they do now with concierge medicine. They don't personally suffer approvals for care. They profit from denying their livestock's care.

Meanwhile we, their capital battery livestock property, will be yelling at robots about refilling our prescription as they hallucinate and start singing happy birthday to us.

We could fight back, but that would require fighting the right war against the right people and not letting them distract us with subordinate culture battles against one another. Those are booby traps laid between us and them by them.

Only one man, a traitor to his own class no less, has dealt them so much as a glancing blow, while we battle one another about one of the dozens of social wedges the owners stoke through their for profit megaphones. "Women hate men! Christians hate atheists! Poor hate more poor! Terfs hate trans! Color hate color! 2nd Gen immigrants hate 1st Gen immigrants!" On and on and on and on as we ALL suffer less housing, less food, less basic needs being met. Stop it. Common enemy. Meaningful Shareholders.

And if you think your little 401k makes you a meaningful shareholder, please just go sit down and have a juice box, the situation is beyond you and you either can't or refuse to understand it.

[–] ameancow 5 points 3 days ago

And if you think your little 401k makes you a meaningful shareholder

"In this company we're all like family, you don't have to worry about anything."

[–] [email protected] 28 points 3 days ago (1 children)

"You want 15 an hour? A machine could do your job!"

So that was a fucking lie.

load more comments (1 replies)
[–] [email protected] 164 points 3 days ago (9 children)

In other words, an AI-supported radiologist should spend exactly the same amount of time considering your X-ray, and then see if the AI agrees with their judgment, and, if not, they should take a closer look. AI should make radiology more expensive, in order to make it more accurate.

But that’s not the AI business model. AI pitchmen are explicit on this score: The purpose of AI, the source of its value, is its capacity to increase productivity, which is to say, it should allow workers to do more, which will allow their bosses to fire some of them, or get each one to do more work in the same time, or both. The entire investor case for AI is “companies will buy our products so they can do more with less.” It’s not “business custom­ers will buy our products so their products will cost more to make, but will be of higher quality.”

Cory Doctorow: What Kind of Bubble is AI?

[–] dance_ninja 41 points 3 days ago (1 children)

AI tools like this should really be viewed as a calculator. Helpful for speeding up analysis, but you still require an expert to sign off.

[–] Frozengyro 34 points 3 days ago (2 children)

Honestly anything they are used for should be validated by someone with a brain.

[–] [email protected] 14 points 3 days ago

A good brain or just any brain?

load more comments (1 replies)
load more comments (8 replies)
[–] [email protected] 10 points 2 days ago (1 children)

Healthcare. My god they want to use it for medicine.

[–] [email protected] 7 points 2 days ago (1 children)

Machine Learning is awesome for medicine, when they run your genetic sequence and then say "we should check for this weird genetic illness that very few people have because it's likely you'll have it" that comes from Machine Learning algorithms finding patterns in the old patient data we feed it.

Machine Learning is great for finding discrepancies in big data sets, like statistics of illnesses.

Machine Learning (AI) is incapable of making good decisions based on that statistical analysis though, which is why it's still a horrible idea to totally automate medicine.

[–] [email protected] 4 points 2 days ago

It also makes tons of mistakes and false-positives.

There's a right way to use it, and the wrong way is by using proprietary algorithms that haven't published openly and reviewed by the government and experts. And with failsafes to override the decisions made by the algorithms, in recognition that they often make terrible mistakes that disproportionately harm minorities.

[–] [email protected] 27 points 3 days ago* (last edited 3 days ago) (10 children)

Yeah fuck AI but can we stop shitting on fast food jobs like they are embarassing jobs to have that are somehow super easy.

What you should hate about AI is the way it is used as a concept to dehumanize people and the labor they do and this kind of meme/statement works against solidarity in our resistance by backhandedly insulting people working in fastfood.

Is it the most complicated job in the world? Probably not, but that doesn't mean these jobs aren't exhausting and worthy of respect.

The whole point of AI is to provide a narrative framework that allows the ruling class to further dehumanize labor and treat workers worse (because replacement with automation is just around the corner).

Realize that agreeing to this framework of low paid jobs as easy and worthless plays right into the actual reasons the ruling class are pushing AI so hard. The true power is in the story not the tech.

[–] [email protected] 9 points 3 days ago (1 children)

I have to had so many conversations with people still thinking fast food is only for high school kids. It's odd. If I say how will they be open during school hours, they make up some bullshit 'get a better job.' It doesn't make snese. Most of these people don't have good jobs and are lucky to be supported in their current lifestyle. They don't see that though.

I try to push the point of 'they are paying for your time and for you to be on standby.' you don't need to be actively moving all 8 hours. Your bosses don't. I've seen so many waste of time meetings to justify their welfare jobs. It's comical. They don't produce value. They are leeches. Not all, but too many.

[–] StopTouchingYourPhone 8 points 3 days ago

I hate that talking point so much (and hear it all the time from people complaining about immigrants turkin ur jerbs). The Fast-Food-Jobs-Are-Brutal-And-Pay-Shit-Wages-Because-They’re-Building-Teen-Character narrative is anti-worker bullshit that denies folk job security and a living wage.

Someone's widowed nan needs this job. The single dad living next door needs this job. A diverse workforce - that includes young people looking for a summer gig - need this job.

[–] ameancow 6 points 3 days ago* (last edited 3 days ago) (1 children)

Can we also talk about how much everyone, everywhere relies on service industry workers and how much everyone would absolutely lose their goddamn minds if they had to make their own burgers and fries twice a week, AND how these staple institutions, jobs we deemed so important that we made people work at them during a pandemic, how much the prices of these sandwiches and snacks has gone up in the last few years, how even bringing up the possibility of increasing minimum wage for these difficult and demanding jobs leads to an entire social "discourse" and fierce debates about if people should be able to afford things.

[–] [email protected] 1 points 1 day ago* (last edited 1 day ago) (1 children)

Also centrists who think of themselves as tech savy will smugly tell you the only way technology can improve fastfood workers lives is by eliminating jobs and thus all the ruling class has to do is push inflation up and these types of people will shout down anyone who argues we need to pay fastfood workers more to compensate because that must be pushing against the "natural" path of technological progress.

It is just another form of bootlicking honestly.

[–] ameancow 2 points 1 day ago (1 children)

The AI cult/singularity bros is absolutely a bootlicking cult, if not licking the boots of the giant tech companies that have no intention of making the world better, then they're licking the imaginary boots of some kind of AI-mommy that they predict will just "be invented" any day now, aaaannnny day, and that AI will make everyone wealthy.

Literally, they think an artificial super intelligence will help them pick stocks and invest and everyone will be rich. Don't dare ask how, just believe it. Don't ask what the several billion people are going to do who live subsistence lifestyles working land and manual labor to support our entire infrastructure. I guess they'll also pick the right stocks and get rich and all the presidents and corporate leaders will just throw their hands in the air as their accumulated wealth becomes worthless overnight.

I am so tired of human ignorance and escapism. We gotta live in the now, and solve the problems we have right now, and stop finding creating ways to blame others so we don't have to do the hard shit.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

I agree and to sharpen the edge to this point even more, this is also about centrists looking to AI for hope because they have utterly and completely ceded control over narratives about what kind of futures are possible or desirable to conservatives and the ultrawealthy.

People think the best way towards a more humane society is by beating around the bush and never drawing a line in the sand for when abuse and exploitation have gone far enough and while it is understandable to a degree as an individual coping strategy, it is precisely this kind of societal mindset that fascism catches on and grows like wildfire in.

This kind of escapism can only lead one place in the end.

load more comments (8 replies)
[–] finitebanjo 39 points 3 days ago (1 children)

You know, OpenAI published a paper in 2020 modelling how far they were from human language error rate and it correctly predicted the accuracy of GPT 4. Deepmind also published a study in 2023 with the same metrics and discovered that even with infinite training and power it would still never break 1.69% error rate.

These companies knew that their basic model was failing and that overfitying trashed their models.

Sam Altman and all these other fuckers knew, they've always known, that their LLMs would never function perfectly. They're convincing all the idiots on earth that they're selling an AGI prototype while they already know that it's a dead-end.

[–] [email protected] 19 points 3 days ago (4 children)

As far as I know, the Deepmind paper was actually a challenge of the OpenAI paper, suggesting that models are undertrained and underperform while using too much compute due to this. They tested a model with 70B params and were able to outperform much larger models while using less compute by introducing more training. I don't think there can be any general conclusion about some hard ceiling for LLM performance drawn from this.

However, this does not change the fact that there are areas (ones that rely on correctness) that simply cannot be replaced by this kind of model, and it is a foolish pursuit.

load more comments (4 replies)
[–] rob_t_firefly 13 points 2 days ago

For those interested, here's the link to that news story from last June: https://www.bbc.com/news/articles/c722gne7qngo

[–] [email protected] 36 points 3 days ago (6 children)

If I've said it once, I've said it a thousand times. LLMs are not AI. It is a natural language tool that would allow an AI to communicate with us using natural language...

What it is being used for now is just completely inappropriate. At best this makes a neat (if sometimes inaccurate) home assistant.

To be clear: LLMs are incredibly cool, powerful and useful. But they are not intelligent, which is a pretty fundamental requirement of artificial intelligence.
I think we are pretty close to AI (in a very simple sense), but marketing has just seen the fun part (natural communication with a computer) and gone "oh yeah, that's good enough. People will buy that because it looks cool". Nevermind that it's not even close to what the term "AI" implies to the average person and it's not even technically AI either so...

I don't remember where I was going with this, but capitalism has once again fucked a massive technical breakthrough by marketing it as something that it's not.

Probably preaching to the choir here though...

[–] glitchdx 11 points 3 days ago

We also have hoverboards. Well, "hoverboards", because that's the branding. They have wheels, and don't hover.

load more comments (5 replies)
[–] [email protected] 23 points 3 days ago (1 children)

Lol. AI can't do "unskilled labor" jobs.

Hyuck. Let's put it in everything!

load more comments (1 replies)
[–] [email protected] 63 points 3 days ago (10 children)
[–] [email protected] 25 points 3 days ago (2 children)

I mean... duh? The purpose of an LLM is to map words to meanings... to derive what a human intends from what they say. That's it. That's all.

It's not a logic tool or a fact regurgitator. It's a context interpretation engine.

The real flaw is that people expect that because it can sometimes (more than past attempts) understand what you mean, it is capable of reasoning.

[–] [email protected] 20 points 3 days ago (3 children)

Not even that. LLMs have no concept of meaning or understanding. What they do in essence is space filling based on previously trained patterns.

Like showing a bunch of shapes to someone, then drawing a few lines and asking them to complete the shape. And all the shapes are lamp posts but you haven't told them that and they have no idea what a lamp post is. They will just produce results like the shapes you've shown them, which generally end up looking like lamp posts.

Except the "shape" in this case is a sentence or poem or self insert erotic fan fiction, none of which an LLM "understands", it just matches the shape of what's been written so far with previous patterns and extrapolates.

load more comments (3 replies)
load more comments (1 replies)
load more comments (9 replies)
[–] [email protected] 37 points 3 days ago (1 children)

Does it rat out CEO hunters though?

[–] [email protected] 31 points 3 days ago (3 children)

That's probably it's primary function. That and maximizing profits through charging flex pricing based on who's the biggest sucker.

load more comments (3 replies)
[–] [email protected] 35 points 3 days ago* (last edited 3 days ago) (3 children)

Bitch just takes orders and you want to make movies with it? No AI wants to work hard anymore. Always looking for a handout.

load more comments (3 replies)
[–] ch00f 29 points 3 days ago (1 children)

What blows my mind about all this AI shit is that these bots are “programmed” by just telling them what to do. “You are an employee working at McDonald’s” and they take it from there.

Insanity.

[–] BradleyUffner 24 points 3 days ago

Yeah, all the control systems are in-band, making them impossible to control. Users can just modify them as part of the normal conversation. It's like they didn't learn anything from phone phreaking.

[–] [email protected] 11 points 3 days ago* (last edited 3 days ago) (7 children)

When chat gpt first was released to the public I thought I’d test it out by asking it questions about something I’m an expert in. The results I got back were a Frankenstein of the worst possible answers from the internet. What I asked wasn’t very technical or obscure, and what I received was useless garbage. Haven’t used it since, I think it’s fraud like NFTs were fraud, only worse because these fraudsters convinced the business class that they have a tech solution to the problem of labor lowering their already obscene profits.

If it got my thing wrong I can only imagine what else it gets wrong. And our elites want to replace us with this? Ok lol good luck with that

load more comments (7 replies)
[–] TORFdot0 10 points 3 days ago

People don’t get my order right either, for what it’s worth. But at least they have the excuse of being over-worked/under-paid, under pressure of being fast to hit metrics, and are usually a teenager or low skill worker.

And usually they take the order right, it just gets messed up on the line. So the AI is worse

[–] uberdroog 19 points 3 days ago (1 children)

The automated response when you pull up to multiple placed gives me the heebie jeebies. It's nonsense no one asked for.

load more comments (1 replies)
[–] pennomi 16 points 3 days ago (2 children)

To be fair, humans also regularly mess up this task. I’d be curious to see comparisons of error rates.

[–] FlyingSquid 36 points 3 days ago (5 children)

McDonald's did not factor in the same thing you are apparently not factoring in- when humans at McDonald's fuck up your order, you can tell them about it.

load more comments (5 replies)
load more comments (1 replies)
load more comments
view more: next ›