this post was submitted on 15 Oct 2024
544 points (96.9% liked)
Fuck AI
Oxford defines plagiarism as,
I think that covers 100% of your argument here.
LLMs can't provide references to their source materials without opening the business behind them to litigation. This means the LLM can't request consent.

The child, in this case, cannot get consent from the original authors whose content trained the LLM, cannot get consent from the LLM itself, and incorporated the result of LLM plagiarism into their work while attempting to pass it off as their own.

The parents are entitled, enabling pricks and don't have legal ground to stand on.
LLMs are certainly trained without consent, but they exist to spot common patterns. They're only likely to plagiarise a passage if that passage is also similar to lots of other text.

In fact, the academic practice of citing references and exact quotes has increased the tendency of statistical models to "plagiarise".

LLMs will continue to be a useful academic tool. We just have to learn how best to incorporate them into our testing.
After reading that the exam rules basically said not to use ChatGPT or similar, I completely agree.