It's not a monster. It doesn't even vaguely resemble a monster.
It's a ridiculously simple tool that does not in any way resemble intelligence and has no agency. LLMs do not have the capacity for harm. They do not have the capability to invent or discover (though if they did, that would be a massive boon for humanity, and it would be insane to hold it back). They're just a mediocre search tool combined with advanced parsing of requests and the ability to format the output into sentences.
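To underline how simple the core mechanism is: at inference time, all an LLM does is predict one next token from the text so far, append it, and repeat. Here's a minimal toy sketch of that loop in Python. The probability table is made-up stand-in data for illustration only; a real model derives these probabilities from billions of learned weights, and nothing here reflects any actual model.

```python
import random

# Toy stand-in for a language model: a lookup table mapping a context
# (the tokens so far) to a next-token probability distribution.
# Entirely hypothetical data, purely for illustration.
NEXT_TOKEN_PROBS = {
    ("the",): {"cat": 0.5, "dog": 0.5},
    ("the", "cat"): {"sat": 0.7, "ran": 0.3},
    ("the", "dog"): {"ran": 0.6, "sat": 0.4},
    ("the", "cat", "sat"): {"<end>": 1.0},
    ("the", "cat", "ran"): {"<end>": 1.0},
    ("the", "dog", "ran"): {"<end>": 1.0},
    ("the", "dog", "sat"): {"<end>": 1.0},
}

def generate(prompt: list[str]) -> list[str]:
    """Autoregressive generation: repeatedly sample the next token
    given everything produced so far, until an end token appears."""
    tokens = list(prompt)
    while True:
        dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
        if dist is None:
            break  # context not covered by our toy table
        choices, weights = zip(*dist.items())
        nxt = random.choices(choices, weights=weights)[0]
        if nxt == "<end>":
            break
        tokens.append(nxt)
    return tokens

print(" ".join(generate(["the"])))  # e.g. "the cat sat"
```

That sample-append-repeat loop is the whole of generation; there's no planning, goal, or agency anywhere in it.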
AI cannot do anything on its own. If your concern is allowing AI to release proteins into the wild, obviously that is a terrible idea. But that's already more than covered by all the existing regulation on dangerous-disease and bioweapons research. AI does not change anything about the scenario.
I largely agree; current LLMs give humanity no capabilities it did not already possess. The point of the regulation, though, is to encourage a certain degree of caution in future development.
Personally, I do think it's a little overly broad. After all, Google search can aid in a cybersecurity attack too. The kill switch idea is also a little silly, largely a waste of time dreamed up by watching too many Terminator and Matrix movies. While we might eventually reach a point where that becomes a prudent idea, we're still quite far away.
We're nowhere near anything that has anything in common with human-level intelligence, or that poses any threat.
The only possible cause for supporting legislation like this is either a complete absence of understanding of what the technology is, combined with treating Hollywood as reality (the layperson, and probably most of the legislators involved), or an aggressive attempt at market control through regulatory capture by big tech. If you understand where we are and what paths we have forward, it's very clear that this can only do harm.