this post was submitted on 01 Sep 2023
235 points (95.7% liked)

There's no way for teachers to figure out if students are using ChatGPT to cheat, OpenAI says in new back-to-school guide

AI detectors used by educators to detect use of ChatGPT don't work, says OpenAI.

[–] EndOfLine 11 points 1 year ago* (last edited 1 year ago) (1 children)

The core of learning is for students to understand the content being taught. Using tools and shortcuts doesn't necessarily negate that understanding.

Using ChatGPT is no different, from an academic evaluation standpoint, than having somebody else do the assignment.

Teachers should already be incorporating some sort of verbal Q&A sessions with students to see whether their demonstrated in-person comprehension matches their written work, though in my personal experience this very rarely happens.

[–] dojan 2 points 1 year ago

That assumes a person just prompts for an essay and leaves it at that, which, to be fair, is likely the issue. The thing is, the genie is out of the bottle and it's not going back in. I think at this point it would be better to adjust the way we teach children and to get familiar with the tools they'll be using.

I've been using GPT and LLaMA to assist me in writing emails and reports. I provide a foundation, and working with the LLMs I get a cohesive output. It saves me time, letting me work on other things, and whoever needs to read the report or email gets a well-written document or letter that doesn't meander the way I normally do.

I write a draft, have the LLMs rewrite the whole thing, and then there's usually some back-and-forth to get the tone and wording right, as well as to trim away whatever the models make up that wasn't in my original text. Essentially, I act as an editor. Writing isn't a skill I really possess, but now there are tools to make up for that.
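To make that draft-then-edit loop concrete, here's a minimal sketch of what it could look like in code, assuming the OpenAI Python client; the model name, prompt, and `polish_draft` helper are illustrative choices, not necessarily the exact tools described above.

```python
# Hypothetical sketch of the "draft -> LLM rewrite -> human edit" loop described above.
# Model name and prompt wording are illustrative assumptions, not the commenter's setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def polish_draft(draft: str, tone: str = "professional and concise") -> str:
    """Ask the model to rewrite a rough draft without adding new claims."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {
                "role": "system",
                "content": (
                    f"Rewrite the user's draft email in a {tone} tone. "
                    "Keep every fact from the draft and do not invent new ones."
                ),
            },
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    draft = "hi team, report is late bc the data pipeline broke tues, fixed now, new ETA friday"
    print(polish_draft(draft))
    # The human still reviews the output, adjusts tone, and trims anything
    # the model added that wasn't in the original draft.
```

The important part is the last step: the human reviews and edits the output rather than sending it blind.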

Using an LLM in that way, you're actively working with the text, and you're still learning the source material. You're just leaving the writing to someone else.