this post was submitted on 29 Sep 2024
212 points (96.5% liked)

Technology

all 37 comments
[–] [email protected] 63 points 3 months ago (4 children)

The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons.

How exactly do LLMs do that? If you've given an LLM's pseudorandom output control over your electrical grid, no regulation will mitigate your stupidity.
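For readers wondering what "pseudorandom output" means here: LLMs pick each token by sampling from a probability distribution, usually controlled by a temperature knob. A minimal sketch with toy logits (this is an illustration of the sampling idea, not any real model's API):

```python
import math
import random

def sample_next(logits, temperature, rng):
    """Softmax-with-temperature sampling: the source of an LLM's
    pseudorandom output. Toy sketch, not a real model's API."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r <= acc:
            return i
    return len(exps) - 1

# Same "prompt" (same logits), different draws -> different tokens.
logits = [1.0, 3.0, 0.5]
rng = random.Random(0)
tokens = [sample_next(logits, temperature=1.5, rng=rng) for _ in range(10)]
```

At high temperature the draws scatter across tokens; as temperature approaches zero the sampler collapses to always picking the highest-logit token.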

[–] bamfic 12 points 3 months ago

Does he even understand the halting problem? I doubt he does, and the legislators evidently don't either.
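For anyone unfamiliar: the halting problem says no general decider `halts(f)` can exist, because any candidate can be turned into its own counterexample. A minimal Python sketch of Turing's diagonal argument (the `make_counterexample` helper is hypothetical, purely for illustration):

```python
def make_counterexample(halts):
    """Given any claimed halting decider `halts(f) -> bool`, build the
    program it must get wrong (Turing's diagonal argument, sketched)."""
    def g():
        if halts(g):
            while True:          # decider said "halts": loop forever
                pass
        return "halted"          # decider said "loops": halt at once
    return g

# A decider that claims nothing ever halts...
never_halts = lambda f: False
g = make_counterexample(never_halts)
# ...is refuted by its own counterexample: g() halts immediately.
```

The same construction refutes any other candidate decider, which is the whole point: no finite amount of testing can prove what an arbitrary program will do.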

[–] [email protected] 6 points 3 months ago (1 children)

I think it's more about asking it for the steps to create a bomb, or how to disrupt the grid, for example where to cut the major edges.
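"Where to cut the major edges" is really a graph question: the critical lines are the bridge edges, whose removal disconnects the network. A minimal sketch using Tarjan's bridge-finding algorithm on a hypothetical toy grid (simple graphs only; parallel edges are ignored):

```python
from collections import defaultdict

def find_bridges(edges):
    """Return the bridge edges of an undirected graph: edges whose
    removal disconnects it (a toy stand-in for 'major edges')."""
    graph = defaultdict(list)
    for u, v in edges:
        graph[u].append(v)
        graph[v].append(u)
    disc, low, bridges = {}, {}, []
    timer = [0]

    def dfs(u, parent):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in graph[u]:
            if v == parent:
                continue
            if v in disc:                      # back edge
                low[u] = min(low[u], disc[v])
            else:
                dfs(v, u)
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:           # v's subtree can't reach above u
                    bridges.append((u, v))

    for node in list(graph):
        if node not in disc:
            dfs(node, None)
    return bridges

# Hypothetical grid: a cycle A-B-C plus a spur C-D. Only C-D is a bridge.
toy_grid = [("A", "B"), ("B", "C"), ("C", "A"), ("C", "D")]
```

Edges inside the cycle are redundant paths; the spur is the single point of failure.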

[–] [email protected] 15 points 3 months ago (2 children)

asking it the steps to create a bomb

That sounds like a self-correcting issue right there

[–] dual_sport_dork 16 points 3 months ago (1 children)

That, and the Internet has been teaching people how to create bombs since the dial-up days. I don't predict that LLMs will be either a benefit or a detriment to that particular strain of natural selection.

[–] [email protected] 2 points 3 months ago

Anyone remember The Anarchist Cookbook?

[–] [email protected] 0 points 3 months ago (1 children)

Still a public safety issue.

[–] [email protected] 4 points 3 months ago (1 children)

Is it more of a public safety issue than if they actually build a working one from a legit bomb manual and deploy it?

[–] [email protected] 0 points 3 months ago (1 children)

No, but I think it could make the knowledge more easily available which increases the risk that it may happen.

[–] [email protected] 3 points 3 months ago (1 children)
[–] [email protected] 1 points 3 months ago (1 children)

I think I heard about it before, but instead of having to remember that, I could just ask an uncensored LLM.

[–] [email protected] 3 points 3 months ago

The actual point was: bomb-making instructions have been floating around in search engine results since the days of dial-up. That particular manuscript has existed since before the Internet. There's nothing ChatGPT could give you that you couldn't have found by typing the same query into Google. Getting the instructions is literally the easiest, lowest-effort, lowest-risk part of building a bomb.

[–] UnderpantsWeevil 5 points 3 months ago

How exactly do LLMs do that?

If you hook an LLM up as an interface replacement for a manual/analog Power Plant interface and start asking the translator to intuit decisions based on fuzzy inputs, you can create a cascade of errors that result in grid failure.
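A toy sketch of that cascade, assuming a hypothetical three-line grid and a controller interface invented purely for illustration (real grid dispatch works nothing like this):

```python
import random

def route(load, capacities, controller):
    """Toy cascade model: dispatch `load` across lines using the
    controller's split; overloaded lines trip, and their share is
    dumped onto the survivors, which can trip in turn."""
    live = dict(capacities)
    flows = controller(load, live)
    while True:
        tripped = [k for k in live if flows.get(k, 0) > live[k]]
        if not tripped:
            return live, flows
        for k in tripped:
            del live[k]
        if not live:
            return {}, {}                     # total blackout
        share = load / len(live)
        flows = {k: share for k in live}      # naive redistribution

def exact_split(load, live):
    """A boring deterministic controller: equal split."""
    return {k: load / len(live) for k in live}

def fuzzy_split(load, live, jitter=0.4, seed=7):
    """An 'LLM-like' controller: a plausible-looking but
    pseudorandom split that may exceed a line's capacity."""
    rng = random.Random(seed)
    weights = {k: 1 + rng.uniform(-jitter, jitter) for k in live}
    total = sum(weights.values())
    return {k: load * w / total for k, w in weights.items()}

caps = {"A": 40, "B": 40, "C": 40}   # hypothetical line capacities
```

With 90 units of load, the equal split (30 per line) is fine; but any controller that pushes one line past 40 trips it, the naive redistribution overloads the other two, and the whole toy grid goes dark. One bad fuzzy answer, total failure.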

If you’ve given an LLM’s pseudorandom output control over your electrical grid, no regulation will mitigate your stupidity.

This rule would prevent a business or public regulator from doing such a thing without proving out safeguards.

And the governor vetoed it.

[–] brucethemoose 47 points 3 months ago* (last edited 3 months ago)

Good.

All this bill would have done is hand OpenAI/Anthropic and the like an effective monopoly (and probably destroy the planet with their insane scaling schemes) by destroying the open model ecosystem. I think fediverse vs. corporate social media is a good analogy: this is kind of like sniping the Fediverse because it's "too dangerous" if it gets too big, without being specific about how to deal with that, when really it's being sniped because it's a competitive threat.

And yes, OpenAI opposed this, but that was lip service. Don't believe a word that comes out of Altman's mouth.

[–] AbouBenAdhem 25 points 3 months ago* (last edited 3 months ago)

Newsom on Sunday instead announced that the state will partner with several industry experts, including AI pioneer Fei-Fei Li, to develop guardrails around powerful AI models.

That’s reassuring—Li is one of the best-qualified people for the role, and she isn’t in the pocket of any of the major players.

[–] NeoNachtwaechter 20 points 3 months ago* (last edited 3 months ago)

Nancy Pelosi argued that the bill would "kill California tech" and stifle innovation.

As long as critics of a safety regulation can get away with such stupid, short-sighted arguments, nobody will ever be safe.

[–] [email protected] 9 points 3 months ago* (last edited 3 months ago)

They're trying to do the same thing in the EU, I guess. It's funny how the tech giants are mad about it and aren't releasing their latest energy-black-hole data pumps in the EU. It's like cocaine gangs threatening not to sell in our countries unless we change the laws. No, thanks.

[–] [email protected] 8 points 3 months ago

Meta: I’ve noticed a lot of VOA links on Lemmy lately, and I’d like to understand why. As I understand it, VOA is essentially a national propaganda news organization targeting an international audience (similar to RT). Why is that a good source for article sharing? Especially in the case of the article at hand, which is just a VOA republication of an Associated Press piece that could have been linked originally.

[–] ShittyBeatlesFCPres 3 points 3 months ago (1 children)

I feel like all the "A.I." safety talk is marketing. They act like they created some sci-fi shit that could end humanity. It's much more likely that Silicon Valley billionaires donating to people like Trump will end humanity.

I’m not saying generative models are useless. I use them for some stuff. Nothing that justifies the carbon footprint, though, and climate change might end humanity. But fucking Siri on steroids is not my fear about the dangers of “A.I.”

[–] atrielienz 1 points 3 months ago

In the right hands, it can be used to steal an individual's likeness and commit crimes like fraud (see using generative LLMs to fake a person's voice and convince a family member they're in trouble, etc.). While I do think these bills are being offered up by people who don't understand the technology they're trying to legislate, I don't think such bills are safety marketing any more than aircraft or road safety legislation is.

[–] Fredselfish 1 points 3 months ago

Does he do anything besides veto bills, good or bad? Every fucking article is about another bill he killed.

[–] [email protected] -5 points 3 months ago (3 children)

Fun fact: Gavin Nelson is actually a living AI. He doesn't always get the answer right, but he does always have words that sound like a plausible answer. He also creates artwork in his mind.

[–] ivanafterall 4 points 3 months ago (1 children)

Gavin Nelson, that's the guy that did "After the Rain," right? Love that guy. Fitting song for these trying times.

[–] [email protected] 2 points 3 months ago

Ah yes, the sequel to "Are Those Rain Clouds?" Thought provoking films.

[–] FlyingSquid 4 points 3 months ago

Is that Major Nelson's first name?

[–] UnderpantsWeevil 0 points 3 months ago

Gruesome Newsome's veto pen is to the right of Reagan's.