this post was submitted on 05 Dec 2023
76 points (97.5% liked)

[–] [email protected] 1 points 11 months ago (1 children)

None of that is possible with FOSS AI code once it's out there on the web. There will only be guidelines for AI available to the public and for companies using AI in their products; the more tech-savvy people will be unaffected.

[–] NeoNachtwaechter 1 points 11 months ago (2 children)

None of that is possible

That is not enough. Think harder.

Today's AIs are child's play, but it's not going to stay that way for long.

One day it will be necessary to act for real, when some AI is causing harm to the public (whether or not a person intended it), and we will need to decide what to do then.

[–] jaybone 1 points 11 months ago

Maybe they could be handled like a virus or an exploit.

[–] [email protected] 0 points 11 months ago* (last edited 11 months ago)

We already struggle to stop people believing fake news in written form. I don't see how we can stop people believing well-made fake news with audio and video.

Personally, I think every country needs some form of government-independent news media, so there is at least one source of information available that is largely trustworthy.

Anything profit-oriented will propagate misinformation as long as it generates clicks.

Oh, and don't let AI control weapons; that's the worst mistake one can make. We can't even manage self-driving cars, let alone a drone carrying weapons capable of mass killing.

Punishment won't reflect the complexity anymore. Say some 14-year-old creates a fake video of the president declaring war, the video goes viral, a real war breaks out, and millions die. Is this 14-year-old now going to prison for life? Would a 16- or 18-year-old? What I'm trying to say is that the barrier to entry is totally different from picking up a gun and shooting someone. A simple bad day or a stupid childish joke will soon have the power of a well-planned and expensive propaganda campaign.

Blocking commercial products from allowing certain actions could be a start, but not a total fix. Think of an AI filter for the faces of public figures, or keyword filters for LLMs/chatbots. Not perfect, but better than nothing.
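To make the keyword-filter idea concrete, here is a minimal sketch in Python of what such a pre-prompt check might look like. Everything here (the blocked-phrase list, the function name) is hypothetical, not taken from any real product:

```python
# Minimal sketch of a keyword filter in front of a chatbot.
# BLOCKED_TERMS and is_allowed() are illustrative names, not a real API.
BLOCKED_TERMS = {
    "president declaring war",
    "fake emergency broadcast",
}

def is_allowed(prompt: str) -> bool:
    """Return False if the prompt contains any blocked phrase (case-insensitive)."""
    lowered = prompt.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)
```

As the comment says, this is easy to evade (rephrasing, misspellings), which is why it's "better than nothing" rather than a fix.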

AI is a very broad term; you could lump almost all software under it. It's also not easy to define what is AI and what isn't. A rule-based system is already some form of dumb AI, so any law on AI affects pretty much everything else.
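The point about rule-based systems can be illustrated with a toy example: a few hand-written if-rules already behave like a (dumb) decision-making AI, with no learning involved. The scenario and thresholds below are made up for illustration:

```python
# Toy rule-based "expert system": hand-coded rules, no machine learning.
def triage(temp_c: float, cough: bool) -> str:
    """Return an advice string based on fixed, hand-written rules."""
    if temp_c >= 39.0 and cough:
        return "see a doctor"
    if temp_c >= 38.0:
        return "rest and monitor"
    return "no action"
```

A law that regulates "AI systems" without a precise definition could plausibly cover something this trivial.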

I'm pretty sure we'll get a shitload of unprepared governments creating all sorts of surveillance laws. An international organisation could prevent the worst of it.

We'd better have started educating people yesterday on how AI works, its consequences, and how to avoid acting blindly. Excuse me, we still have a climate to save...