this post was submitted on 07 Nov 2023
146 points (82.3% liked)

Technology

59674 readers
3211 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 1 year ago
MODERATORS
[–] joystick 87 points 1 year ago (7 children)
[–] [email protected] 7 points 1 year ago

Believable because:

However, the system is highly specialized for scientific journal articles. When presented with real articles from university newspapers, it failed to recognize them as being written by humans.

So outside of its purview? Agree.

[–] EvilBit 80 points 1 year ago (1 children)

As I understand it, one of the ways AI models are commonly trained is basically to run them against a detector and train against it until they can reliably defeat it. Even if this was a great detector, all it’ll really serve to do is teach the next model to beat it.
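A toy illustration of that adversarial loop (hypothetical numbers, no real models): a "generator" with a single style knob hill-climbs against a frozen detector until the detector no longer flags its output. Everything here, including the "burstiness" threshold, is invented for the sketch.

```python
import random

random.seed(0)

def detector(burstiness):
    # Toy "AI text detector": flags text whose sentence-length
    # variation ("burstiness") falls below a fixed threshold.
    return burstiness < 0.5  # True means "looks AI-generated"

def train_generator_against(detector, steps=1000):
    # Toy "generator": a single knob controlling burstiness.
    burstiness = 0.1
    for _ in range(steps):
        candidate = burstiness + random.uniform(-0.05, 0.1)
        # Keep any mutation the detector no longer flags, or that
        # moves the knob upward toward evading it.
        if not detector(candidate) or candidate > burstiness:
            burstiness = candidate
    return burstiness

evolved = train_generator_against(detector)
print(detector(0.1), detector(evolved))  # the evolved generator evades the detector
```

The frozen detector effectively becomes a training signal: publishing a detector hands the next generator its loss function.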

[–] [email protected] 27 points 1 year ago (2 children)

That’s how GANs are trained, and I haven’t seen anything about GPT4 (or DALL-E) being trained this way. It seems like current generative AI research is moving away from GANs.

[–] EvilBit 4 points 1 year ago

I know it’s intrinsic to GANs but I think I had read that this was a flaw in the entire “detector” approach to LLMs as well. I can’t remember the source unfortunately.

[–] KingRandomGuy 4 points 1 year ago

Also, one very important aspect of this is that it must be possible to backpropagate through the discriminator. If you only have inference access to a detector, but not its weights and architecture, you can't perform backpropagation and therefore can't compute the gradients needed to update your generator's weights.

That said, yes, GANs have somewhat fallen out of favor due to their relatively poor sample diversity compared to diffusion models.
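To illustrate the white-box requirement with a toy (not a real detector or attack): the gradient step below is only possible because we know the detector's weight `W`. With query-only access to a black-box API, `detector_grad` could not be written at all.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# White-box detector: we know its parameters, so we can differentiate it.
W, B = 3.0, -1.5

def detector_score(x):
    return sigmoid(W * x + B)  # probability the input "looks AI-generated"

def detector_grad(x):
    # d/dx sigmoid(Wx + B) = W * s * (1 - s): computable only
    # because W (the detector's internals) is known to us.
    s = detector_score(x)
    return W * s * (1.0 - s)

def evade(x, lr=0.5, steps=200):
    # Gradient descent on the detector score: push the input toward
    # "human-looking". With a black-box detector that only returns
    # labels, this gradient is unavailable.
    for _ in range(steps):
        x -= lr * detector_grad(x)
    return x

x0 = 1.0             # initially flagged: score above 0.5
x_adv = evade(x0)
print(detector_score(x0), detector_score(x_adv))
```

In practice, black-box detectors can still be attacked by other means (e.g. training a local surrogate), but direct gradient-based training of a generator against them is out.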

[–] demonsword 75 points 1 year ago* (last edited 1 year ago) (1 children)

No references whatsoever to false positive rates, which I'd assume are quite high. Also, they point out that they built this detector specifically to catch chemistry-related AI-generated articles.

[–] [email protected] 14 points 1 year ago

If you call heads 100% of the time, you'll be 100% accurate on predicting heads in a coin toss.
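That base-rate trap in numbers (hypothetical counts, chosen for the sketch): a "detector" that flags everything as AI scores high accuracy on an AI-heavy test set while having a 100% false positive rate.

```python
# Hypothetical test set: 90 AI-generated and 10 human-written samples.
ai_samples, human_samples = 90, 10

# A "detector" that flags *every* sample as AI-generated:
true_positives = ai_samples       # every AI sample flagged (correctly)
false_positives = human_samples   # every human sample flagged (wrongly)

accuracy = true_positives / (ai_samples + human_samples)
false_positive_rate = false_positives / human_samples

print(accuracy)             # 0.9
print(false_positive_rate)  # 1.0
```

This is why reporting accuracy without a false positive rate, as the article does, says almost nothing.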

[–] [email protected] 50 points 1 year ago (1 children)

I really, really doubt this. OpenAI recently said that AI detectors are pretty much impossible, and the article literally uses the wrong name to refer to a different AI detector.

Especially when you can change ChatGPT's style just by asking it to write in a more casual way, "stylometrics" seems an improbable method for detecting AI as well.
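A toy sketch of what "stylometrics" measures, and why it's fragile (illustrative features only, not those of any real detector): shift the style of the text and the features shift with it.

```python
import re
from statistics import mean, pstdev

def stylometric_features(text):
    # Two classic stylometric signals: average sentence length and
    # "burstiness" (variation in sentence length). Real stylometry
    # uses many more features; this is an illustrative toy.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return mean(lengths), pstdev(lengths)

formal = ("The detector identifies articles with high accuracy. "
          "It was trained on chemistry journal papers. "
          "The evaluation used held-out test data.")
casual = ("Honestly? It works. Kind of surprising, but the thing was "
          "trained on a whole pile of chemistry papers and somehow that "
          "was enough.")

print(stylometric_features(formal))   # uniform sentence lengths
print(stylometric_features(casual))   # much higher variation
```

A generator that can be prompted into either style at will can move itself across whatever boundary such features draw.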

[–] [email protected] 3 points 1 year ago (1 children)

It's in OpenAI's best interest to say they're impossible. Regardless of whether that's true, they're the least trustworthy possible source to rely on when forming your understanding of this.

[–] [email protected] 6 points 1 year ago

OpenAI had their own AI detector, so I don't really think it's in their best interest to say that such a product can't work.

[–] [email protected] 37 points 1 year ago (2 children)

Willing to bet it also catches non-AI text and calls it AI-generated constantly

[–] [email protected] 15 points 1 year ago

The best part is that if AI does a good job of summarizing, then anyone who is good at summarizing will look like an AI. If AI news articles look like a human wrote them, then human-written news articles will look like AI.

[–] [email protected] 12 points 1 year ago* (last edited 1 year ago)

The original paper does have some figures about misclassified paragraphs of human-written text, which would seem to mean false positives. The numbers are higher than for misclassified paragraphs of AI-written text.

[–] TropicalDingdong 31 points 1 year ago (4 children)

This is kind of silly.

We will 100% be using AI to generate papers now and in the future. If the AI can catch any wrong conclusions or misleading interpretations, that would be helpful.

Not using AI to help you write at this point is you wasting valuable time.

[–] [email protected] 10 points 1 year ago* (last edited 1 year ago) (3 children)

I do a lot of writing of various kinds, and I could not disagree more strongly. Writing is a part of thinking. Thoughts are fuzzy, interconnected, nebulous things, impossible to communicate in their entirety. When you write, the real labor is converting that murky thought-stuff into something precise. It's not uncommon in writing to have an idea all at once that takes many hours and thousands of words to communicate. How is an LLM supposed to help you with that? The LLM doesn't know what's in your head; using it is diluting your thought with statistically generated bullshit. If what you're trying to communicate can withstand being diluted like that without losing value, then whatever it is probably isn't meaningfully worth reading. If you use LLMs to help you write stuff, you are wasting everyone else's time.

[–] [email protected] 6 points 1 year ago* (last edited 1 year ago) (1 children)

Yeah, I agree. You can see this in all AI generated stuff - none of it has any purpose, no intention.

As for the people who say it's saving them time: I have to ask what they're doing that can be replaced by AI, whether they're actually any good at it, and whether the AI has improved their work or just made it happen faster at the expense of quality.

I have turned off all predictive writing of any kind on my devices; it gets in my head and stops me from forming my own thoughts. I want my authentic voice, and I can't stand the idea of a machine prompting me with its own idea of what I want to say.

Like... we're prompting the AI, but are they really prompting us?

[–] [email protected] 3 points 1 year ago

Amen. In fact, I wrote a whole thing about exactly this -- without an LLM! Like most things I write, it took me many hours and evolved many times, but I take pleasure in communicating something to the reader, in the same way that I take pleasure in learning interesting things reading other people's writing.

[–] LunchEnjoyer 16 points 1 year ago

Didn't OpenAI themselves state some time ago that it isn't possible to detect it?

[–] Deckweiss 15 points 1 year ago* (last edited 1 year ago) (8 children)

I don't understand. Are there places where using ChatGPT for papers is illegal?

The state where I live explicitly allows it. Only plagiarism is prohibited. But having ChatGPT formulate the results of your scientific work, correct the grammar, improve the style, etc. doesn't bother anybody.

If you use ChatGPT you should still read over the output, because it can say something wrong about your results, and you should run a plagiarism tool on it, because it can plagiarize unintentionally. So what's the big deal?

[–] alienanimals 20 points 1 year ago (2 children)

It's not a big deal. People are just upset that kids have more tools/resources than they did. They would prefer kids wrote on paper with pencil and did not use calculators or any other tool that they would have available to them in the workforce.

[–] [email protected] 8 points 1 year ago (3 children)

There's a difference between using ChatGPT to help you write a paper and having ChatGPT write the paper for you. The latter constitutes plagiarism, which schools/universities are strongly against.

The problem is being able to differentiate between a paper written by a human (which may or may not have been written with ChatGPT's assistance) and a paper entirely written by ChatGPT and presented as a student's own work.

I want to strongly stress that the latter situation is plagiarism. The argument doesn't even need to involve the plagiarism that ChatGPT itself commits. The definition of plagiarism is simple: ChatGPT wrote a paper, you the student did not, and you are presenting ChatGPT's paper as your own; ergo, plagiarism.

[–] [email protected] 8 points 1 year ago (2 children)

Teachers when I was little: "You won't always have a calculator with you." And here I am with a device in my pocket 24/7 that's more powerful than what sent astronauts to the moon.

[–] [email protected] 3 points 1 year ago

1% battery intensifies

[–] LukeMedia 2 points 1 year ago

Fun fact for you, many credit-card/debit-card chips alone are comparably powerful to the computers that sent us to the moon.

It's mentioned a bit in this short article about how EMV chips are made. This summary of compute power does come from a company that manufactures EMV chips, so there is bias present.

[–] [email protected] 3 points 1 year ago (2 children)

Why should someone bother to read something if you couldn’t be bothered to write it in the first place? And how can they judge the quality of your writing if it’s not your writing?

[–] Deckweiss 2 points 1 year ago (2 children)

Science isn't about writing. It is about finding new data through the scientific process and communicating it to other humans.

If a tool helps you do any of it better, faster or more efficiently, that tool should be used.

But I agree with your sentiment when it comes to for example creative writing.

[–] [email protected] 2 points 1 year ago* (last edited 1 year ago)

Science is also creative writing. We do research and write the results, in something that is an original product. Something new is created; it's creative.

An LLM is just reiterative. A researcher might feel like they're producing something, but they're really just reiterating. Even if the product is better than what they would have produced themselves, it is still worth less, as it is not original and makes no contribution that hasn't been made already.

And for a lot of researchers, the writing and the thinking blend into each other. Outsource the writing, and you're crippling the thinking.

[–] [email protected] 3 points 1 year ago

I don’t think people are arguing against minor corrections, just wholesale plagiarism via AI; that's the big deal. Your argument is as reasonable as it is adjacent to the issue, which is to say completely.

[–] Something_Complex 10 points 1 year ago (2 children)

I'm gonna need something more than that to believe it.

[–] macarthur_park 4 points 1 year ago (1 children)

The article is reporting on a published journal article. Surely that’s a good start?

[–] KingRandomGuy 2 points 1 year ago

I haven't read the article myself, but it's worth noting that in CS as a whole and especially ML/CV/NLP, selective conferences are generally seen as the gold standard for publications compared to journals. The top conferences include NeurIPS, ICLR, ICML, CVPR for CV and EMNLP for NLP.

It looks like the journal in question is a physical sciences journal as well, though I haven't looked much into it.

[–] [email protected] 3 points 1 year ago (2 children)

Isn't this like a constant fight between the people who develop anti-AI-content tools and the internet pirates who develop anti-anti-AI-content tools? Pretty sure the pirates always win.

[–] [email protected] 3 points 1 year ago (1 children)

You sully the good name of Internet Pirates, sir or madam. I'll have you know that online pirates have a code of conduct and there is no value in promulgating an anti-ai or anti-anti-ai stance within the community which merely wishes information to be free (as in beer) and readily accessible in all forms and all places.

You are correct that the pirates will always win, but they(we) have no beef with ai as a content generation source. ;-)

[–] [email protected] 3 points 1 year ago* (last edited 1 year ago) (1 children)

I say we develop a Voight-Kampff test as soon as possible for detecting if we're speaking to an AI or an actual human being when chatting or calling a customer representative of a company.

Edit: I made a mistake.

[–] agent_flounder 2 points 1 year ago (1 children)

if we're speaking to a real person or an actual human being

Ummm ...

[–] [email protected] 2 points 1 year ago

Hahahaha OMG. I fixed it. Thanks!

[–] [email protected] 2 points 1 year ago

They still can't catch AI-written content on websites like https://themixnews.com/, for example https://themixnews.com/cj-amos-height-age-brother/
