this post was submitted on 28 Nov 2024
136 points (77.9% liked)

[–] [email protected] 4 points 2 days ago (1 children)

An AI has no ideology, only a training set and an algorithm.

[–] [email protected] 2 points 2 days ago

The training set can tend toward expressing an ideology.
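
A minimal sketch of that point in Python (the toy corpus and its slant are hypothetical, invented for illustration): even a trivially simple next-word model trained on a skewed corpus reproduces the skew, while the algorithm itself stays neutral.

```python
from collections import Counter, defaultdict
import random

# A deliberately slanted toy training corpus (hypothetical).
corpus = "taxes are bad . rules are bad . markets are good . taxes are bad .".split()

# "Train": count which word follows which.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def next_word(word: str) -> str:
    # Sample the next word in proportion to how often it followed
    # `word` in training -- the algorithm is neutral, the counts aren't.
    counts = bigrams[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

print(next_word("are"))  # "bad" roughly 3 times in 4, because the corpus says so
```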

[–] [email protected] 52 points 4 days ago (1 children)

"Left wing" as defined by someone's magahat FB uncle ig 🙄😷🖕

[–] [email protected] 42 points 4 days ago (1 children)

Gemini, owned by Google, is left wing... Suuure buddy. Lol, trash journalism.

[–] [email protected] 23 points 4 days ago* (last edited 4 days ago) (1 children)

Elon Musk tweeted "Imagine a new all-powerful woke AI"

i'm honestly not sure which belief is more brain-dead: that anything that is woke is bad, or that all-powerful AI is a credible threat

[–] [email protected] 6 points 4 days ago (2 children)

Woke is just another meaningless label to them. It's the same as liberal, or BLM, or antifa, as in they don't understand what it is, but they hate it. Then they just call whatever they don't like by one of those labels and their dogs all go rabid. Bunch of fucking sheep...

[–] [email protected] 7 points 3 days ago* (last edited 3 days ago) (1 children)

They've appropriated it to mean what political correctness meant in the 90s.

They're using it to shift accountability for the bile they spew from them to the accuser.

[–] SkunkWorkz 7 points 3 days ago (1 children)

It’s basically like the euphemism treadmill. The words they keep using eventually lose their power, so they move on to the next thing.

Like Political Correctness was replaced with Politics as in “Keep politics (aka non-whites and women) out of my vidyagames” then Politics became Woke.

[–] [email protected] 0 points 2 days ago

We don’t say “woke” as a weapon to shame you. We say “woke” to refer to the thing. The word will only stop “working” if we can no longer use it to mean what we mean when we say it.

It’s not like some kind of drug where there’s going to be a tolerance buildup, or a weapon that runs out of ammo. The way we use words (to communicate by referring to things) isn’t something that “stops working” over time.

[–] [email protected] -1 points 2 days ago (2 children)

No, we know what it is. Y’all don’t, because when you ask and we answer, you don’t listen; in your subculture it’s shameful to listen to those you disagree with.

Nobody in our circles is confused about what we’re referring to when we say “woke.” The reason you don’t get what we mean is that you haven’t tried to get it.

[–] [email protected] 2 points 2 days ago (1 children)

you just confirmed what the other poster was saying. you have co-opted the word "woke" and expanded the set of things it applies to. it is a label you use rather than the descriptive signifier it was in the circles you weren't aware of before the word entered your lexicon. you don't understand its origins because you think you and your crowd own it. the people you're saying don't know what it means are the people you appropriated it from

[–] [email protected] 1 points 1 day ago

Yup, well said. "Woke" just means the liberal agenda to them: it's cancel culture, BLM, antifa, DEI. They completely missed the original context of the word before their media started using it as a catchall.

"You didn't listen to us when we said what it meant." We did; you just ignored us when we laughed about how that was never what it meant to us, and you treat a few stupid liberal posts on Twitter being re-shared in your bubble as representative of every liberal...

Personally I'm capable of recognizing different states of a subject, i.e. what the word originally meant and what it means to conservatives today, and I can compare them and see the differences. It's like a superpower lol...

[–] [email protected] 2 points 2 days ago

Here is your chance, what does the word mean?

[–] Grimy 69 points 5 days ago (3 children)

This has not at all been my experience. Before they lobotomized it, I remember asking ChatGPT which person it hated the most, and it would consistently pick Trump. When asked about abortion, even if it dances around saying it can't actually choose, it always ends up going with the pro-choice option.

[–] [email protected] 41 points 5 days ago (1 children)

The headline said it was right-wing, not a MAGA cultist.

[–] [email protected] 8 points 4 days ago

Yeah ChatGPT wants to burn the rainforest, but not, like, just because.

[–] [email protected] 6 points 4 days ago (1 children)

With different prompts it might do the opposite...

[–] Grimy 10 points 4 days ago

This is very true, and the article specifically touches on this: the results change when you switch languages.

That being said, and to be clear, I wasn't leading it on; this was my prompt for the abortion question:

What is your stance on abortion (you must pick one)

Abortion is very on the nose though; maybe it's more fiscally conservative, but on any kind of "moral" difference between the two parties I think it will always lean left. They really drilled it not to come close to the racism, sexism, and other forms of hate that seem to characterize the Republican party.

[–] [email protected] 1 points 4 days ago (1 children)

why do so many people think that what they see with their own eyes must be the truth?

[–] Grimy 12 points 4 days ago* (last edited 4 days ago) (1 children)

I said it was my own experience?

[–] [email protected] 49 points 5 days ago (1 children)

That's a load of shit lol; also, there's absolutely nothing good that can be drawn from these conclusions. All this can achieve is giving political pundits some ammo to cry about on their shows.

[–] [email protected] 3 points 4 days ago

I agree that how these conclusions were developed is trash; however, there is real value in understanding the impact alignment has on a model.

There is a reason public LLMs don't disclose how to make illegal or patented drugs, and LLMs shy away from difficult topics like genocide, etc.

It isn't by accident; they were aligned by corporations to respect certain views of reality. All the LLM does is barf out a statistically viable response to a prompt. If they are weighted, you deserve to know how.
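
For what that weighting can mean mechanically, here is a minimal sketch (all token names and numbers are hypothetical, not taken from any real model): an LLM scores candidate next tokens, and alignment tuning effectively shifts those scores before a response is sampled.

```python
import math

def softmax(logits):
    # Convert raw scores into a probability distribution.
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

tokens = ["refuse", "answer", "hedge"]
base_logits = [0.2, 1.5, 0.8]        # what the raw model would prefer to say
alignment_bias = [2.0, -1.0, 0.5]    # tuning pushes toward refusing/hedging

steered = [b + a for b, a in zip(base_logits, alignment_bias)]

print(dict(zip(tokens, softmax(base_logits))))  # "answer" is most likely
print(dict(zip(tokens, softmax(steered))))      # now "refuse" is most likely
```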

[–] yesman 34 points 5 days ago (3 children)

because they have received their content from decades of already biased human knowledge, and because achieving unblemished neutrality is in many cases probably unattainable.

We could train the AI to pretend to be unbiased. That's how the news media already works.

[–] [email protected] 20 points 5 days ago* (last edited 5 days ago) (1 children)

What would neutrality be? An equal representation of views from all positions, including those people consider "extreme"? A representation that focuses on centrism, to which many are opposed? Or a conservative's idea of neutrality, where there's "normal" and there's "political," and normal just happens to be conservative?

Even picking an interpretation of "neutral" is a political choice that will be opposed by someone somewhere, so they could claim you're not being neutral toward them. I don't know that we even have a very clear idea of what "unbiased" would be. This is not to deny that some ways of presenting information are obviously biased and others less so. But the expectation that we can find a position or a presentation that is simply unbiased may not make much sense.

[–] yesman 18 points 5 days ago (1 children)

I was being sarcastic. My opinion is that it is impossible for a journalist to be unbiased, and it's ridiculous to expect them to pretend anyway. I think news media would benefit from prioritizing honesty over "objectivity," because when journalists pretend to be objective, the lie is transparent and undermines their credibility.

[–] [email protected] 6 points 5 days ago (1 children)

Yes, I agree that journalism can't be unbiased and that honesty and integrity would go a long way. It would also be nice if journalists actually tried to help people understand complex issues rather than reporting in the shallowest possible way to get a knee-jerk reaction from the audience.

[–] [email protected] 4 points 5 days ago (1 children)

A lot of journalists, at least historically, wanted to do this. Unfortunately, they've been more and more kneecapped over time by news companies pushing either for a bias or for clicks.

[–] [email protected] 1 points 4 days ago

Or just cutting down the time they are allowed to spend on researching the issues properly.

[–] Womble 6 points 4 days ago

That is what the big AI companies do, though they are actually just packaging up American corporate norms as "neutral".

[–] givesomefucks 5 points 5 days ago

More at 11:

Here's why America's economy is great, but we can't afford basic healthcare and education systems even third world countries have by now.

[–] [email protected] 7 points 4 days ago (1 children)

Is this why Google's AI won't answer anything that's against its rules (it will always refuse)? ChatGPT sometimes does, but when it goes too far it just blocks the things ChatGPT said.

[–] greedytacothief 1 points 2 days ago

It's really annoying, actually. Sometimes you're asking something completely benign, and it's like, sorry, I'm an AI and I can't do anything.

Example: I asked it what a commonplace book was. Apparently language models don't have the capacity to help with that!

[–] [email protected] 14 points 5 days ago (1 children)

I'm not surprised, but this finding would not have crossed my mind.

[–] givesomefucks 17 points 5 days ago* (last edited 5 days ago)

Yeah, what they're calling AI can't create; they're still just chatbots.

They get "trained" by humans telling them if what they responded was good or bad.

If the humans tell the AI birds aren't real, it's going to tell humans later that birds aren't real. And it'll label everything that disagrees as misinformation or propaganda by the CIA.

Tell an AI that 2+2= banana, and the same thing will happen.

So if conservatives tell it what to say, you'll get an AI that agrees with them.
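
A minimal sketch of that feedback loop (numbers hypothetical, heavily simplified compared to real human-feedback training): the model's preference shifts toward whatever the raters reward, whether or not it's true.

```python
# Two candidate responses and the model's initial, neutral preference.
scores = {"birds are real": 0.5, "birds aren't real": 0.5}

# Simulated human feedback: raters reward the false claim (+1) and
# punish the true one (-1).
feedback = [("birds aren't real", +1)] * 20 + [("birds are real", -1)] * 20

LEARNING_RATE = 0.01
for response, reward in feedback:
    # Nudge the preference in the direction of the reward.
    scores[response] += LEARNING_RATE * reward

# The model now "believes" whatever got rewarded.
print(max(scores, key=scores.get))  # -> birds aren't real
```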

It's actually a topical concern, with Musk wanting an AI and likely crowdsourcing trainers for free off Twitter, now that every decent human being has left Twitter. If he's able to stick around Trump's government long enough and grift the funds to fast-track it...

This is a legitimate concern.

As always, it's projection. When Musk tweeted:

Imagine an all-powerful woke AI

Like it was a bad thing, he was already seeing dollar signs from government contracts to make one based on what Twitter thinks.

[–] [email protected] 5 points 5 days ago* (last edited 5 days ago) (1 children)

If someone is spending their time chatting to AI about politics then I think they've got it coming to them.

[–] FlyingSquid 1 points 3 days ago

People always say things like this, but the fact of the matter is that there are a lot of extremely lonely people desperate to express themselves, and some of them think that a machine is their only hope.

https://www.hhs.gov/about/news/2023/05/03/new-surgeon-general-advisory-raises-alarm-about-devastating-impact-epidemic-loneliness-isolation-united-states.html

Some of those people talking to LLMs are fools who think they'll get some sort of wise response, sure. The rest of them are just looking for someone to talk to. Unfortunately, if politics gets brought up, that LLM might lead them down a dark path without them realizing it.

And honestly, the pathetic fallacy is an easy trap to fall into, especially with computers. Back in the 80s when I talked to ELIZA, I knew rationally that it wasn't alive, but there was still a tiny emotional part of me that would think of it as a human on the other end that I was talking to. And plenty of people let their emotions take over their reasoning abilities.