this post was submitted on 05 Apr 2024
869 points (96.2% liked)

Technology


A shocking story was promoted on the "front page" or main feed of Elon Musk's X on Thursday:

"Iran Strikes Tel Aviv with Heavy Missiles," read the headline.

This would certainly be a worrying world news development. Earlier that week, Israel had conducted an airstrike on Iran's embassy in Syria, killing two generals as well as other officers. Retaliation from Iran seemed like a plausible occurrence.

But, there was one major problem: Iran did not attack Israel. The headline was fake.

Even more concerning, the fake headline was apparently generated by X's own official AI chatbot, Grok, and then promoted by X's trending news product, Explore, on the very first day of an updated version of the feature.

top 50 comments
[–] [email protected] 181 points 7 months ago (2 children)

People who deploy AI should be held responsible for the slander and defamation the AI causes.

[–] TheBat 101 points 7 months ago (10 children)

Slander is spoken. In print, it's libel.

[–] [email protected] 29 points 7 months ago (1 children)

Get me pictures of Spiderman!

[–] [email protected] 14 points 7 months ago

Parker, why does Spider-Man have seven fingers in this photo?!

[–] [email protected] 17 points 7 months ago

Now I imagine Xitter lawyers arguing that since it was neither spoken nor "printed", they can't be charged.

[–] kadu 154 points 7 months ago* (last edited 7 months ago) (3 children)

I wonder how legislation is going to evolve to handle AI. Brazilian law would punish a newspaper or social media platform claiming that Iran just attacked Israel - this is dangerous information that could affect somebody's life.

If it were up to me, if your AI hallucinated some dangerous information and provided it to users, you're personally responsible. I bet if such a law existed in less than a month all those AI developers would very quickly abandon the "oh no you see it's impossible to completely avoid hallucinations for you see the math is just too complex tee hee" and would actually fix this.

[–] Ottomateeverything 97 points 7 months ago (4 children)

I bet if such a law existed in less than a month all those AI developers would very quickly abandon the "oh no you see it's impossible to completely avoid hallucinations for you see the math is just too complex tee hee" and would actually fix this.

Nah, this problem is actually too hard to solve with LLMs. They don't have any structure or understanding of what they're saying so there's no way to write better guardrails.... Unless you build some other system that tries to make sense of what the LLM says, but that approaches the difficulty of just building an intelligent agent in the first place.

So no, if this law came into effect, people would just stop using AI. It's too cavalier. And imo, they probably should stop for cases like this unless it has direct human oversight of everything coming out of it. Which also, probably just wouldn't happen.

[–] [email protected] 55 points 7 months ago (5 children)

Yep. To add on, this is exactly what all the "AI haters" (myself included) are going on about when they say stuff like there isn't any logic or understanding behind LLMs, or when they say they are stochastic parrots.

LLMs are incredibly good at generating text that works grammatically and reads like it was put together by someone knowledgeable and confident, but they have no concept of "truth" or reality. They just have a ton of absurdly complicated technical data about how words/phrases/sentences are related to each other on a structural basis. It's all just really complicated math about how text is put together. It's absolutely amazing, but it is also literally and technologically impossible for that to spontaneously coalesce into reason/logic/sentience.

Turns out that if you get enough of that data together, it makes a very convincing appearance of logic and reason. But it's only an appearance.

You can't duct-tape enough Speak & Spells together to rival the mass of the Sun and have it somehow just become something that outputs a believable human voice.
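The "complicated math about how text is put together" can be illustrated with a toy model. This is not how a real LLM works internally (those use billions of learned parameters and attention, not a lookup table), but a tiny bigram chain shows the same basic property: text assembled purely from co-occurrence statistics, with no notion of truth anywhere in the system.

```python
import random
from collections import defaultdict

# Toy bigram "language model": record which word follows which in a
# corpus, then sample from those counts. The corpus here is made up
# for illustration.
corpus = (
    "iran strikes tel aviv . iran denies strikes . "
    "tel aviv reports no strikes ."
).split()

following = defaultdict(list)
for word, nxt in zip(corpus, corpus[1:]):
    following[word].append(nxt)

def generate(start, length=8, seed=0):
    # Walk the chain: each next word is chosen only by what has
    # followed the current word before -- statistics, not meaning.
    random.seed(seed)
    out = [start]
    for _ in range(length):
        out.append(random.choice(following[out[-1]]))
    return " ".join(out)

print(generate("iran"))  # grammatical-looking, truth-free word salad
```

The output locally resembles the corpus, which is exactly the point: plausibility falls out of the statistics, but whether "iran strikes" or "iran denies strikes" comes out is a coin flip, not a claim about the world.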


For an incredibly long time, ChatGPT would fail questions along the lines of "What's heavier, a pound of feathers or three pounds of steel?" because it had seen the normal variation of the riddle with equal weights so many times. It has no concept of one being smaller than three. It just "knows" the pattern of the "correct" response.

It no longer fails that "trick", but there's significant evidence that OpenAI has set up custom handling for that riddle over top of the actual LLM, as it doesn't take much work to find similar ways to trip it up by using slightly modified versions of classic riddles.

A lot of supporters will counter, "Well, I just ask it to tell the truth, or tell it that it's wrong, and it corrects itself," but I've seen plenty of anecdotes in the opposite direction, with ChatGPT insisting that its hallucination was fact. It doesn't have any concept of true or false.

[–] neatchee 22 points 7 months ago* (last edited 7 months ago) (1 children)

The shame of it is that despite this limitation LLMs have very real practical uses that, much like cryptocurrencies and NFTs did to blockchain, are being undercut by hucksters.

Tesla has done the same thing with autonomous driving too. They claimed to be something they're not (fanboys don't @ me about semantics) and made the REAL thing less trusted and take even longer to come to market.

Drives me crazy.

[–] FlashMobOfOne 8 points 7 months ago (5 children)

Yup, and I hate that.

I really would like to one day just take road trips everywhere without having to actually drive.

[–] neatchee 4 points 7 months ago* (last edited 7 months ago) (1 children)

Right? Waymo is already several times safer than both humans and Tesla's garbage, yet municipalities keep refusing them. Trust is a huge problem for them.

And yes, haters, I know that they still have problems in inclement weather but that's kinda the point: we would be much further along if it weren't for the unreasonable hurdles they keep facing because of fear created by Tesla

[–] cygon 9 points 7 months ago

I love that example. Microsoft's Copilot (based on GPT-4) immediately doesn't disappoint:

Microsoft Copilot: Two pounds of feathers and a pound of lead both weigh the same: two pounds. The difference lies in the material—feathers are much lighter and less dense than lead. However, when it comes to weight, they balance out equally.

It's annoying that for many things, like basic programming tasks, it manages to generate reasonable output that is good enough to goad people into trusting it, yet it hallucinates obviously wrong stuff or follows completely insane approaches on anything off the beaten path. Every other day, I have to spend an hour justifying to a coworker why I wrote code this way when the AI has given him another "great" suggestion, like opening a hidden window with a UI control to query a database instead of going through our ORM.

[–] [email protected] 6 points 7 months ago (2 children)

but it is also literally and technologically impossible for that to spontaneously coelesce into reason/logic/sentience

Yeah, see, one very popular modern religion (without official status or any need for one to explicitly identify with it, but really influential) is exactly about "a wonderful invention" spontaneously emerging in the hands of some "genius" who "thinks differently".

Most people put this idea far above reaching your goal by making a myriad of small steps, not skipping a single one.

They also want a magic wand.

The fans of "AI" today are, deep inside, simply Luddites. They want some new magic to emerge to destroy the magic they fear.

[–] kadu 23 points 7 months ago

So no, if this law came into effect, people would just stop using AI. And imo, they probably should stop for cases like this unless it has direct human oversight of everything coming out of it.

Then you and I agree. If AI can be advertised as a source of information but at the same time can't provide safeguarded information, then there should not be commercial AI. Build tools to help video editing, remove backgrounds from photos, go nuts, but do not position yourself as a source of information.

Though if fixing AI is at all possible, even if we predict it will only happen after decades of technology improvements, it for sure won't happen if we are complacent and do not add such legislative restrictions.

[–] [email protected] 6 points 7 months ago (2 children)

The legislation should work like it would before. It's not something new, like filesharing in the Internet was.

Which means - punishment.

[–] [email protected] 92 points 7 months ago (4 children)

To everyone that goes to "X" to get the "real", unfiltered news, I hope you can see that it's not that site anymore.

[–] [email protected] 29 points 7 months ago* (last edited 7 months ago) (1 children)

Yet, annoyingly, much of the press still uses it to disseminate news.

I understand journalism is in a rough spot these days and many are there against their will but something needs to change abruptly. This slow exodus is too slow for democracy to survive '24.

[–] [email protected] 74 points 7 months ago* (last edited 7 months ago) (4 children)

"Grok" sounds like the name of a really stupid orc from a D&D campaign.

[–] [email protected] 55 points 7 months ago

In case you're not familiar, https://en.m.wikipedia.org/wiki/Grok.

It's somewhat common slang in hacker culture, which of course Elon is shitting all over as usual. It's especially ironic since the meaning of the word roughly means "deep or profound understanding", which their AI has anything but.

[–] [email protected] 17 points 7 months ago

Pretty sure it's something from a Heinlein book, namely Stranger in a Strange Land.

[–] [email protected] 16 points 7 months ago (2 children)

Grok is from Stranger in a Strange Land.

[–] umbraroze 19 points 7 months ago (1 children)

Yup. It also got added to the Jargon File, which was an influential collection of hacker slang.

If there's one thing that Elon is really good at, it's taking obscure beloved nerd tidbits and then pigeon-shitting all over them.

[–] [email protected] 8 points 7 months ago

Grok smash!

[–] [email protected] 38 points 7 months ago (1 children)

Oh, what a surprise. Another AI spat out some more bullshit. I can't wait until companies finally give up on trying to do everything with AI.

[–] CosmicCleric 21 points 7 months ago (5 children)

I can’t wait until companies finally give up on trying to do everything with AI.

I don't think that will ever happen.

They're accepting of AI-driven car accidents that cause harm. It's all part of the learning / debugging process to them.

[–] [email protected] 13 points 7 months ago* (last edited 7 months ago) (5 children)

The issue is that the process won't ever stop. It won't ever be debugged sufficiently.

EDIT: Due to the way it works. A bit like static error in control theory: you know that for different applications it may or may not be acceptable. The "I" in PID regulators and all that. IIRC
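The static-error analogy above can be made concrete with a hedged sketch (illustrative numbers, not any real system): a pure proportional controller fighting a constant disturbance settles with a permanent offset, the steady-state error; adding the integral term, the "I" in PID, accumulates that offset away.

```python
def simulate(kp, ki, setpoint=1.0, disturbance=-0.5, steps=2000, dt=0.01):
    """Euler-integrate a first-order plant x' = u + disturbance
    under a PI controller u = kp*error + ki*integral(error)."""
    x, integral = 0.0, 0.0
    for _ in range(steps):
        error = setpoint - x
        integral += error * dt
        u = kp * error + ki * integral
        x += (u + disturbance) * dt
    return x

# P-only: settles at setpoint + disturbance/kp = 1 - 0.25 = 0.75,
# a permanent offset that no amount of waiting fixes.
p_only = simulate(kp=2.0, ki=0.0)

# PI: the integral term winds up until it cancels the disturbance,
# so the system settles at the setpoint.
with_i = simulate(kp=2.0, ki=1.0)

print(p_only, with_i)
```

The parallel being drawn in the comment: whether a residual error is acceptable depends on the application, and for "news generation" a residual rate of confident fabrication arguably is not.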

[–] Nobody 37 points 7 months ago

Beware, terminally incompetent interns everywhere. Doing something incredibly damaging to your company over social media on your first day is officially a job that’s been taken by AI.

[–] BonesOfTheMoon 32 points 7 months ago (1 children)

Something to do with Twitter and Elon was disinformation? I'm keeling over in shock.

[–] [email protected] 8 points 7 months ago (1 children)

Hope his ass goes to court over this.

[–] BonesOfTheMoon 7 points 7 months ago (1 children)

I hope he has an aneurysm using a device to enlarge his penis or something.

[–] [email protected] 17 points 7 months ago (1 children)

I don't really understand this headline

The bot made it? So why was it promoted as trending?

[–] [email protected] 30 points 7 months ago (2 children)

It's pretty simple: trending is based on . . . what's trending among users.

Or, as the article explains for those who can't comprehend what trending means:

Based on our observations, it appears that the topic started trending because of a sudden uptick of blue checkmark accounts (users who pay a monthly subscription to X for Premium features including the verification badge) spamming the same copy-and-paste misinformation about Iran attacking Israel. The curated posts provided by X were full of these verified accounts spreading this fake news alongside an unverified video depicting explosions.
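The failure mode the article describes, a volume-based trending feed hijacked by copy-paste spam, is easy to sketch. This is purely illustrative and not X's actual algorithm; the post texts and counts are made up.

```python
from collections import Counter

# A naive trending algorithm that ranks topics purely by post volume
# cannot tell organic discussion from a few hundred paid accounts
# pasting the same sentence.
posts = (
    ["Iran strikes Tel Aviv with heavy missiles"] * 300   # coordinated spam
    + ["New earthquake reported on the east coast"] * 120  # organic chatter
    + ["Cute dog picture thread"] * 80
)

def trending(posts, top_n=1):
    # Rank by raw repetition count, with no source diversity check.
    return Counter(posts).most_common(top_n)

print(trending(posts))  # the spammed fake headline "wins"
```

Any signal that weights sheer repetition without checking how many independent sources are behind it is trivially gamed, which is what the blue-checkmark spam wave exploited.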

[–] Sludgehammer 23 points 7 months ago* (last edited 7 months ago) (1 children)

Amusingly, Grok also spat out a headline about how police were being deployed to shoot the earthquake, after being exposed to a sarcastic tweet. https://i.imgur.com/qltkEsU.jpeg

[–] [email protected] 12 points 7 months ago (1 children)

It does say it's likely hyperbole, so they probably just tased and arrested the earthquake.

Also I’m impressed by the 50,000 to 1,000,000 range for officers deployed. It leaves little room for error.

[–] aceshigh 7 points 7 months ago (1 children)

Wow. What a world we live in.

[–] [email protected] 17 points 7 months ago

Grok, world's smartest orc

[–] [email protected] 14 points 7 months ago* (last edited 7 months ago) (1 children)

The same thing happened with major newspapers about 100 to 150 years ago ... governments realized that if any one group or company had control over all the information without regulation, businesses would quickly figure out ways to monetize information for the benefit of those with all the money and power. They then had to figure out how to start regulating newspapers and news media in order to maintain some sort of control and sanity in the entire system.

But like the newspapers of old .... no one will do anything about all this until it causes a major crisis or causes a terrible event ... or events.

In the meantime ... big corporations controlling 99% of all media and news information will stay unregulated or regulated as little as possible until terrible things happen and society breaks down.

[–] toxicbubble 6 points 7 months ago

Skynet has begun
