Statistical analysis of existing literary works is certainly not the same sort of thing as generating new literary works based on models trained on old ones.
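To make the distinction concrete, here's a rough sketch in Python (purely illustrative, not anything from the article): the first function only measures an existing text, while the second uses it to produce new text.

```python
# Rough illustration of the difference (not from the article):
# "statistical analysis" measures an existing text, while a
# generative model uses that text to produce new text.
import random
from collections import Counter, defaultdict

def analyze(text):
    """Statistical analysis: just counts; nothing new is written."""
    words = text.split()
    return {"word_count": len(words), "top_words": Counter(words).most_common(3)}

def generate(text, length=10):
    """Toy generative model: a first-order Markov chain built from the text."""
    words = text.split()
    chain = defaultdict(list)
    for a, b in zip(words, words[1:]):
        chain[a].append(b)
    out = [random.choice(words)]
    for _ in range(length - 1):
        out.append(random.choice(chain[out[-1]] or words))
    return " ".join(out)

sample = "it was the best of times it was the worst of times"
print(analyze(sample))   # descriptive statistics about the original
print(generate(sample))  # new (nonsensical) text derived from the original
```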
Almost all of the people who are fearful that AI is going to plagiarize their work don't know the difference between statistical analysis and generative artificial intelligence. They're both AI, and unfortunately in those circles it seems anything even remotely related to AI is automatically considered bad, without any further thought.
I wouldn't characterize statistical analysis as "AI", but sadly I do see people (like those authors) totally missing the differences.
I'm generally hesitant about AI stuff (particularly with the constant "full steam ahead, 'disrupt' everything!" mindset that is far too prevalent in certain tech spheres), but what I saw described in this article looks really, really cool. The one bit I'm hesitant about is where actual pages are presented (since that does reproduce a portion of the text), but other than that it's really sad to see this project killed by a massive misunderstanding.
There's a subset of artificial intelligence called unsupervised learning, which is a form of statistical analysis in which you let an agent find patterns in data for you, as opposed to trying to drive the agent toward a desired outcome. I'm not 100% sure that's what the website author was using, but it sounded pretty close to it. It's extremely powerful and nothing like the generative LLMs most people now think of when the term AI is thrown around.
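To give a sense of what I mean, here's a minimal sketch of that kind of unsupervised pattern-finding using scikit-learn's TF-IDF and k-means (just an illustration; I have no idea what tooling the site actually used):

```python
# Minimal sketch of unsupervised learning as statistical analysis:
# cluster a handful of text snippets by their word-usage patterns.
# No labels, no target outcome -- the algorithm just finds structure.
# (Illustrative only; not the method the website author actually used.)
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

snippets = [
    "the detective examined the body in the library",
    "the inspector questioned the butler about the murder",
    "the starship jumped to lightspeed near the nebula",
    "the alien fleet surrounded the orbital station",
]

# Turn each snippet into a vector of word-frequency statistics.
vectors = TfidfVectorizer(stop_words="english").fit_transform(snippets)

# Let k-means group the snippets by similarity -- nothing is generated,
# and no text is reproduced; the output is just cluster assignments.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)
print(labels)  # e.g. [0 0 1 1]: mysteries vs. space opera
```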
I agree, though; it sucks the project got killed. It seemed super interesting and insightful.
Not sure it matters that much at the end of the day.
And yet it was attacked. The reality is content creators have only contempt for the concept of fair use. Another example is copyright strikes on unfavorable reviews.