this post was submitted on 02 Apr 2024
45 points (95.9% liked)

BecomeMe

753 readers
1 users here now

Social Experiment. Become Me. What I see, you see.

founded 1 year ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 3 points 7 months ago (1 children)

This is the best summary I could come up with:


Right after the blockbuster release of GPT-4, the latest artificial intelligence model from OpenAI and one of the most advanced in existence, the language of scientific research began to mutate.

Source: “Monitoring AI-Modified Content at Scale: A Case Study on the Impact of ChatGPT on AI Conference Peer Reviews”

If this makes you uncomfortable — especially given A.I.’s current unreliability — or if you think that maybe it shouldn’t be A.I.s reviewing science but the scientists themselves, those feelings highlight the paradox at the core of this technology: It’s unclear what the ethical line is between scam and regular usage.

Isn’t it possible that human culture contains within it cognitive micronutrients — things like cohesive sentences, narrations and character continuity — that developing brains need?

companies are refusing to pursue advanced ways to identify A.I.’s handiwork — which they could do by adding subtle statistical patterns hidden in word use or in the pixels of images.

Similarly, right now teachers across the nation have created home-brewed output-side detection methods, like adding hidden requests for patterns of word use to essay prompts that appear only when copied and pasted.


The original article contains 1,636 words, the summary contains 188 words. Saved 89%. I'm a bot and I'm open source!

[–] Schneakyyyyy 0 points 7 months ago