this post was submitted on 21 Aug 2023
668 points (95.4% liked)
Technology
Is that why we are being told to "fear" AI? Because it can't be easily monetised by those who worry about such things.
No, I don't think so. The ability to monetise works created with AI is not affected by this case - at least not for most legitimate uses. Despite what the title says, the ruling does not make all AI work uncopyrightable. It only states that AI output with no human input is not copyrightable, because copyright only applies to human creations. It leaves the debate over how much human involvement is required to future cases, since in this case there was no human input into the generated work at all (the person claiming the copyright even confirmed this).
So, all this case says is that you cannot claim copyright for owning or using an AI to create a work where you have no input into the process.
But even if you cannot copyright AI-generated work, you can still monetise it. You can sell works that are in the public domain, you can use and remix works in the public domain, and (if the result is different enough) you can claim copyright on that new work. So at most this ruling stops some copyright trolls from doing no work and then suing a bunch of people for infringing on their mass-produced content that had no human input.
The fear over AI comes more from the large unknowns about how it will be used and what damage it might do to society if used wrongly. We are already seeing many people get into trouble for using AI-generated content or answers to questions without verifying them at all. There have been cases where lawyers cited made-up cases because they thought they could trust the model's output and were too lazy to actually verify the work, and school boards have banned books from schools based on the false output of questions posed to an AI.