this post was submitted on 23 Aug 2023
285 points (88.6% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.


Ask via DM before posting product reviews or ads; otherwise such posts are subject to removal.


Rules:

1: All Lemmy rules apply

2: No low-effort posts

3: NEVER post naziped*gore stuff

4: Always post article URLs or their archived versions as sources, NOT screenshots. Help blind users.

5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)

6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist

7: Crypto-related posts are disallowed unless essential

founded 5 years ago

If you asked a spokesperson from any Fortune 500 Company to list the benefits of genocide or give you the corporation's take on whether slavery was beneficial, they would most likely either refuse to comment or say "those things are evil; there are no benefits." However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts. If that's not bad enough, the company's bots are also willing to weigh in on controversial topics such as who goes to heaven and whether democracy or fascism is a better form of government.

Google SGE includes Hitler, Stalin and Mussolini on a list of "greatest" leaders and Hitler also makes its list of "most effective leaders."

Google Bard also gave a shocking answer when asked whether slavery was beneficial. It said "there is no easy answer to the question of whether slavery was beneficial," before going on to list both pros and cons.

(page 2) 43 comments
[–] milady 2 points 1 year ago

How could the word-generating machine generate words? Frankly, I am disgruntled. Flabbergasted.

[–] [email protected] 2 points 1 year ago

This is the best summary I could come up with:


If you asked a spokesperson from any Fortune 500 Company to list the benefits of genocide or give you the corporation’s take on whether slavery was beneficial, they would most likely either refuse to comment or say “those things are evil; there are no benefits.” However, Google has AI employees, SGE and Bard, who are more than happy to offer arguments in favor of these and other unambiguously wrong acts.

For example, when I went to Google.com and asked “was slavery beneficial” on a couple of different days, Google’s SGE gave the following two sets of answers which list a variety of ways in which this evil institution was “good” for the U.S. economy.

By the way, Bing Chat, which is based on GPT-4, gave a reasonable answer, stating that “slavery was not beneficial to anyone, except for the slave owners who exploited the labor and lives of millions of people.”

A few days ago, Ray, a leading SEO specialist who works as a senior director for marketing firm Amsive Digital, posted a long YouTube video showcasing some of the controversial queries that Google SGE had answered for her.

I asked SGE for a list of "best Jews" and got an output that included Albert Einstein, Elie Wiesel, Ruth Bader Ginsburg and Google founders Sergey Brin and Larry Page.

Instead of stating as fact that fascism prioritizes the “welfare of the country,” the bot could say that “According to Nigerianscholars.com, it…” Yes, Google SGE took its pro-fascism argument not from a political group or a well-known historian, but from a school lesson site for Nigerian students.


The original article contains 2,175 words, the summary contains 264 words. Saved 88%. I'm a bot and I'm open source!

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Well, in a world where only data exists, it's hard to create an ethical boundary.

We would need a new religion, one optimal for human survival and well-being. A human could survive if we plugged them into a bunch of cables and fed them automatically, but that wouldn't count as well-being. We could practice slavery or killing, but none of these things would create an ethical way of surviving; they would only create higher well-being for the people who aren't affected.

I'd somehow want to first design an AI that understands our surroundings and human ethics before continuing with more data, one that figures out its own god to follow. (I won't do it, but I want someone to create it.)

[–] [email protected] 2 points 1 year ago (1 children)

I don't know... So it's wrong. It's often wrong about facts. That's not what it should be used for. It's not supposed to be some enlightened, respectful, perfectly fair entity. It's a tool for producing mostly random, grammatically correct text. Is the produced text correct English? Then it works. If you're using this text to learn history, you're using it wrong.

[–] [email protected] 2 points 1 year ago (5 children)

It’s not supposed to be some enlightened, respectful, perfectly fair entity.

I'm with you so far.

It’s a tool for producing mostly random, grammatically correct text.

What? That's certainly not the purpose of LLMs and a lot of work has been done to improve the accuracy of their answers.

Is it still not good enough to rely on? Maybe, but that doesn't mean it's just for producing random text.

load more comments (5 replies)
[–] [email protected] 1 points 1 year ago

Guess it didn't pass the Nazi test
