Come on man. This is exactly what we have been saying all along. These “AIs” are not creating novel text or ideas. They are just regurgitating the text they were fed in similar contexts. They just don’t repeat things verbatim, because they use statistics to predict the next word. And guess what, that’s plagiarism by any real-world standard you pick, no matter what tech scammers keep saying. The fact that the laws haven’t caught up doesn’t change the reality of the mass plagiarism we are seeing …
Just because that happened in this context doesn't automatically mean it's happening in all contexts. It's absolutely possible, and I'd love to see a conclusive study on this topic, but one LLM version doing this in one application context in one case isn't clear proof either way. If a question doesn't have many answers (be they real or fake), and one answer seems to solve the problem with explicit instructions, you'd want the AI system to give the necessary parts of those same instructions, which is what happened here. This is how I expect and understand these systems to work - so I'd love to see exactly what the people GP is arguing against actually said, because I don't know what argument is being responded to.
And people like you keep insisting that “AIs” are stealing ideas rather than verbatim copies of the words, as if that makes it OK.
I didn't insist on anything, I wanted an explanation of the position GP is arguing against. I'm of the opinion that any commercial generative AI use should be completely forbidden until a proper framework is built that ensures compensation of sources before anything else - but you don't care about my position, because anything that doesn't resemble "AI bad" must automatically mean "AI good" to you.
Except LLMs have no concept of ideas, and you people keep claiming they do, even when shown evidence, like this post, that they don’t think.
Can you define "idea" and show me an actual study on this topic? Because I have seen too many examples both for and against all of these grand theses. I don't know where things lie. But you can't show that something is unable to do thing A because it did thing B, without showing that B is diametrically opposed to A. You have to properly define "idea" and define an experiment for that purpose.
And even if they did, repeat after me: this would still be plagiarism if a human had done it. Stop excusing the big tech companies, man.
I haven't said whether this is or is not plagiarism. Stop being so rabid about anything that isn't explicitly anti-AI - I'm not making pro-AI points.