this post was submitted on 15 Oct 2024
544 points (96.9% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

A Massachusetts couple claims that their son's high school attempted to derail his future by giving him detention and a bad grade on an assignment he wrote using generative AI.

An old and powerful force has entered the fraught debate over generative AI in schools: litigious parents angry that their child may not be accepted into a prestigious university.

In what appears to be the first case of its kind, at least in Massachusetts, a couple has sued their local school district after it disciplined their son for using generative AI tools on a history project. Dale and Jennifer Harris allege that the Hingham High School student handbook did not explicitly prohibit the use of AI to complete assignments and that the punishment visited upon their son for using an AI tool—he received Saturday detention and a grade of 65 out of 100 on the assignment—has harmed his chances of getting into Stanford University and other elite schools.

Yeah, I'm 100% with the school on this one.

[–] Knock_Knock_Lemmy_In -5 points 2 months ago (4 children)

If I used a calculator on a maths test, I should only be penalised if the rules stated no calculators.

[–] GreenKnight23 6 points 2 months ago* (last edited 2 months ago) (1 children)

Oxford defines plagiarism as:

Presenting work or ideas from another source as your own, with or without consent of the original author, by incorporating it into your work without full acknowledgement.

I think that covers 100% of your argument here.

LLMs can't provide references to their source material without opening the businesses behind them up to litigation. This means the LLM can't request consent.

The child, in this case, cannot get consent from the original authors whose content trained the LLM, cannot get consent from the LLM, and incorporated the result of the LLM's plagiarism into their work while attempting to pass it off as their own.

The parents are entitled and enabling pricks and don't have legal ground to stand on.

[–] Knock_Knock_Lemmy_In 0 points 2 months ago

LLMs are certainly trained without consent, but they exist to spot common patterns. An LLM is only likely to plagiarise a passage if that passage closely resembles lots of other text in its training data.

In fact, the academic practice of citing references and exact quotes has increased the tendency of statistical models to "plagiarise".

LLMs will continue to be a useful academic tool. We just have to learn how best to incorporate them into our testing.

The parents are entitled and enabling pricks and don't have legal ground to stand on.

After reading that the exam rules basically said not to use ChatGPT or similar tools, I completely agree.

[–] [email protected] 5 points 2 months ago (1 children)

Is AI more like a calculator, or more like copy/pasting Wikipedia articles without attribution?

[–] Knock_Knock_Lemmy_In 0 points 2 months ago (1 children)

It's not really a calculator because it gives different answers. Newer models can give attribution (e.g. Bing Copilot).

My opinion is that LLMs are not going to go away. Testing needs to adapt to focus on the human element. Marks are no longer lost for bad handwriting.

[–] [email protected] 3 points 2 months ago (1 children)

Just like when I was a kid using Wikipedia for research back when that wasn't acceptable, the expectation should be that you use it to understand the material, then follow it to the source material and read that, or at least find a relevant quote that lets you restate what Wikipedia said in your own words, with attribution.

Copying Wikipedia or copying the output of an LLM is similarly academically fraudulent. LLMs are just more likely to also be wrong.

[–] Knock_Knock_Lemmy_In 1 points 2 months ago

Mostly agreed. I think the "in your own words" part will be debated strongly over the next few years. Will proof of writing your own prompt be sufficient?

[–] [email protected] 3 points 2 months ago (1 children)

Do you think you should be penalised if you got ChatGPT to sit the math test for you?

[–] Knock_Knock_Lemmy_In -1 points 2 months ago

No, you'd be penalising yourself (unless you got the Wolfram Alpha plugin working).

Professors should be setting exams that ChatGPT can't hope to solve.

[–] Red_October 3 points 2 months ago (1 children)

And what if you had an app on your phone that let you just take a picture of the question, and write out the answer it gave you? A calculator still requires that you know what to input, and at the level of math where a calculator really is just easy mode, the test absolutely would specifically prohibit them.

[–] Knock_Knock_Lemmy_In -1 points 2 months ago

And what if you had an app on your phone that let you just take a picture of the question, and write out the answer it gave you?

At the college level, the question setter should ensure they are testing something where this is not possible.