This also happens if the post coincidentally shows up in All and some particularly anti-AI people see it, even when it's isolated in a designated community like this one.
That's pretty metal
Thanks for clarifying. I get your point; I honestly don't doubt that someone, or a group, with such opinions exists out there, I just don't think it represents anywhere near a critical mass.
Sadly, when there's big money to be made, such as with blockbusters, even some human work before AI was already pretty 'sanitized' or 'toned down' in terms of creativity, since it has to be as uncontroversial and mainstream-appealing as possible. So yes, if AI got good enough, it would definitely be used by some of those companies.
However, I don't see any path for current AI technology to get there without at least one or two breakthroughs similar to the advent of the current generation of AI.
I also don't think it will replace anything beyond the works of companies with a large profit incentive. We have a massive number of communities where human creativity is central, in all shapes and forms, producing works that aren't appealing to everyone, but to the people they resonate with they are so uniquely special that they're irreplaceable. This kind of art thrives on its human creativity rather than its ability to make money. The human desire to produce and consume art that resonates with them is so strong it won't go anywhere as long as people have the time and ability to produce it.
Rest assured, there is basically no talk of replacing anyone with AI in my corner of the creative industry.
Should the day come that AI truly becomes good enough to compete with human creativity, it's likely that AI will have become far more human in how it creates art, and would start exhibiting the same tendencies to share human experiences and memories. Then the difference will start to fade and we might indeed go the way of the horses, but such a scenario is essentially sci-fi right now; we may never even get close, and art might have made many radical shifts before we get there. And just like the camera didn't kill hand-painted portraits, there will still be a place for human creativity, just less of it.
But as long as the incentive is there, it might eventually happen, so we should be ready to safeguard creativity in some manner along the way. Currently, though, the most effective ways of doing so are aimed at curbing our capitalistic society, not at the technology. Aiming at the technology could, in the worst case, lock creatives out of it and start a race to keep up in capability, one that large companies would surely win if we let them.
They have more means and more data than smaller creators, and AI does seem to pull some of that power back toward smaller creators. So even though it might seem like big companies are all pro-AI, don't be surprised if they are totally fine with taking a powerful tool away from everyone else so they can keep it just for themselves.
I'm someone who talks about AI a lot on Lemmy. People might call me pro-AI, although I consider myself to be neither pro nor anti, but admittedly optimistic about AI in general. I work with people in the creative industry: artists, writers, designers, you name it.
As others have already mentioned, your question, to my knowledge, does not reflect most people's view on AI online, and even less so in real life. And I talk to and participate in communities that are overwhelmingly pro-AI. The "AI bros" you mention sound like caricatures to me.
There are some who have become bitter from the lies and misinformation spread about AI, and who are intentionally hateful as a kind of reverse gotcha, but that's about it. You have those on the anti-AI side as well, for different reasons.
I don't consider AI to be anywhere close to a threat to the industry, other than indirectly through the forces of capitalism and mismanagement. Your question does indeed seem quite insane to me. Most people who use and talk about AI seem to me more interested in using it to make new creative works, or to enhance existing works with greater depth in the same amount of time. Creative people are human too and have limited time; often their time is already cut short by deadlines, and their work was systematically undervalued even before AI.
AI as it currently stands, on its own, simply has no sense of direction. Without much effort you can get very pretty, elegant, interesting, but ultimately meaningless things from it. That cannot replace anyone, because such content, while intriguing, doesn't capture attention for long. It also cannot do complex tasks such as discussing work with stakeholders or staying consistent across revisions and feedback.
With a creative person at the wheel, though, something special can happen: they can give the AI the direction it needs to bring back that meaning.
This is a perspective a lot of people miss, since they only see AI as ChatGPT or Midjourney, not realizing that these are proprietary (not open source) front ends to the technology that hide most of the controls and options it has. Those controls are essentially a new craft of their own, and to this day very few people are even in the process of mastering them.
Everyone knows about prompts, but you can do much more than that depending on the model. Some image models let you provide your own input image, and even additional images that control aspects of the output like depth, layout, or outlines. Text models let you pack in a ton of pre-existing data that guides what they will output next, and give you control over the internal math that decides how they arrive at their guess for the next word.
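To make that concrete, here's a minimal sketch of the text-model side, assuming the open source Hugging Face transformers library and the small public "gpt2" checkpoint (both just illustrative choices on my part; any local model works the same way). The prepended context and the sampling knobs are exactly the kind of controls that proprietary front ends hide from you:

```python
# Minimal sketch, not a definitive workflow: it shows the dials beyond the prompt
# that a raw text model exposes. Assumes `pip install transformers torch`.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# "Packing in pre-existing data": anything placed in front of the prompt
# steers what the model writes next, no retraining required.
context = "Style notes: terse, melancholic, second person.\n"
prompt = "The lighthouse keeper checks the lamp one last time"
inputs = tokenizer(context + prompt, return_tensors="pt")

# "Control over the internal math": sampling parameters decide how the model
# picks its guess for the next word.
output = model.generate(
    **inputs,
    max_new_tokens=60,
    do_sample=True,          # sample instead of always taking the single likeliest word
    temperature=0.8,         # lower = safer and more predictable, higher = more adventurous
    top_p=0.9,               # only sample from the most probable ~90% of candidate words
    repetition_penalty=1.2,  # discourage the model from looping on itself
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

The image-model side has its own equivalents (input images plus conditioning images for depth, layout, or outlines), but the idea is the same: the prompt is only one dial among many.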
Without a creative and inventive person behind the wheel, you get the generic AI material we all know. With such a person, you get material that is at times indistinguishable from conventional work. These people are already plentiful in the creative industry, they are not going anywhere, and new people who fit this description are always welcome. Art is for everyone, and especially for those who are driven.
Really the only threat to the creative industry in regards to AI is that some wish to bully and coerce those who use the technology into submission, forcing them to reject it and even avoid considering it altogether, like dogma. This creates a group that will never learn how to operate AI models. Should AI ever become necessary to work in the creative industry (it currently doesn't look like it), these people will be absolutely decimated by the ones who kept an open mind, and more importantly by the youth of tomorrow, who are always more open to new technologies. This story repeats whenever new technology comes around: it never treats those who reject it kindly, if it sticks around.
The loom and the Luddites, cars and horses, cameras and painters, mine workers and digging machines, human calculators and mechanical calculators, the list goes on.
So no, being pro-AI doesn't necessarily mean you are participating in the downfall of the creative industry. Neither does being anti-AI. But spreading falsehoods and stifling healthy discussion? That can kill any industry except those built on dishonesty.
While I agree that the AI they will implement will likely not be very effective, it doesn't have to be to cause massive human suffering. E.g. Google incorrectly flagging medical photos of your kid, taken for your doctor, as CSAM. There's also no guarantee that once these companies finally wake the fuck up (if they're not already completely aware that what they're doing is messed up) they will close the holes they're punching, which means they could swap the AI out for a mass surveillance tool at any point without you knowing. Nobody should be a fan of this.
Stocks in what? AI? I can't own stock in a technology. I could buy stock in companies that use AI, but the only ones on the stock market are ones I'd rather die than give a single penny to, since they abuse the technology (and technology in general). And they are not the only ones using it. I'm not really a fan of stocks to begin with; profit-focused companies are a plague, in my opinion.
Seems a bit strange to blame AI for this. Meta has always been garbage, using technology to its worst effect.
While I'm not particularly fond of using AI for anything that needs to be factually accurate, this post reeks of the classic "Quit having fun!" meme. Your value judgement of AI is no more valid than anyone else's, and honestly, in my opinion, it's very misdirected and anger-fueled.
It's in everyone's best interest for AI content to be honestly declared. You are almost certainly already consuming AI content from somewhere without knowing it, because angry hate mobs have conditioned people to lie about and obfuscate their AI usage to avoid becoming targets of that hate. And if not, you eventually will, given the power of the technology; the entire creative industry is already quietly integrating it, as everyone with an open mind knows. But there is no benefit to being honest with closed-minded, angry people, and that situation is a shame.
Good AI usage is impossible to detect, which is all the more reason to encourage honesty about it.
You had me in the first half, but you lost me in the second half with the claim of stolen material. There is no such material inside the AI, just the ideas that can be extracted from that material. People hate having their ideas taken by others, but it happens all the time, even at the hands of the very people who claim that's why they don't like AI. It's somewhat of a rite of passage for your work to become so liked by others that they take your ideas, and at that point every artist or creative person has to swallow the tough pill that their ideas are not their property, even when their way of expressing them is. The alternative would be dystopian, since the same companies we all hate, which abuse current genAI as well, would hold the rights to every idea possible.
If you make your work public, having your ideas lifted from it is an inevitability. People learn from the works they see, and some of them try to understand why certain works are so interesting, extracting the ideas that make them so; that is what AI does as well. If you hate AI for this, you must also hate pretty much all creative people for doing the exact same thing. There's even a famous quote about it from before AI was a thing: "Good artists copy, great artists steal."
I'd argue that the abuses of AI, using it to replace (or consider replacing) artists and other working creatives, spreading misinformation, making scams easier, wasting resources by using AI where it doesn't belong, and any other unethical uses, are far worse than it tapping into the same freedom we all already enjoy. People actually using AI for good will not be pumping out cheap AI slop; they instead weave it into their process to the point that it's not even clear AI was used in the end. The two are not the same and should not be confused.
Plagiarism is not the same as copyright infringement. Why you think people probably plagiarize is doubly irrelevant then.
I never claimed it was, but as I said before, the exact definition of copyright infringement is less relevant here because it differs depending on local laws, while plagiarism is usually the concept that guides the ethical position from which those laws are produced. Which is why yes, it's relevant.
Show me literally any example of the defendant’s use of “analysis” having any impact whatsoever in a copyright infringement case or a law that explicitly talks about it, or just stop repeating that it is in any way relevant to copyright.
This is an unreasonable request, and you know it. Again, we don't share the same laws, and different jurisdictions provide different exceptions like fair use, fair dealing, or straight-up exclusion from copyright for such uses. But it is wholly beside my argument. You can look at any two pieces of modern media that exist in the same space and see ideas they share while not sharing the same expression of those ideas: how some characters fulfill the same purpose, dress the same way, or have similar personalities. You are free to write a book with a plumber, a mustached man, someone wearing a red hat with the letter M on it, and someone who goes to save a princess from a castle, but as long as they're not the same person, they are most likely not considered the protected expression of Mario. The same ideas that make up Mario: one use infringing, the other not.
Nobody goes to court over this because EVERYONE takes each other's ideas; "Good artists copy, great artists steal." It's only when you step on the specific expression of an idea that it becomes realistically actionable, and at that point transformativeness is definitely discussed almost every single time, because it is critical to determining whether the copyright was actually infringed or not.
Wrong. The “all together” and “without adding new patterns” are not legal requirements. You are constantly trying to push the definition of copyright infringement to be more extreme to make it easier for you to argue.
I'm sorry, but are you really being this dishonest? I mentioned EXPLICITLY in my last comment that I wasn't giving a definition of copyright infringement, because it's beside the point and not what I'm claiming. Yet here you are saying I am "trying to push" a definition. We are not lawyers or legal scholars speaking to each other; I am having a discussion with you as another anonymous person on a message board.
Unfortunately, an AI has no concept of ideas, and it simply encodes patterns, whatever they might happen to be.
You are just arguing semantics and linguistics; it's meaningless. We are not talking technical specifics, not even a specific model, nor a specific technique for exactly how the information is encoded. It's a rough concept of "ideas" / "data" / "patterns": information. And AI definitely has that.
Again, you’re morphing the discussion to make an argument.
You mean I'm making an argument. Because yes, I am. I don't see why this negative framing is necessary, nor why it's noteworthy enough to bring up, unless you really just want to make me look bad for no apparent reason.
Mario’s likeness has to be encoded into the model in some way. Otherwise, this would not have been the image generated for “draw an italian plumber from a video game”. There is absolutely nothing in the prompt to push GPT-4 to combine those elements. There are also no “new” patterns, as you put it. That’s exactly the point of the article. As they put it:
Yes, there is some idea/pattern of "Mario-ness" in the model; I said that. I was not trying to say no material of Mario was used in training, but that it's not as if someone pasted direct images of Mario in there. AI models make logical connections between concepts, even for things we cannot put a good name to, and they will allow you to prompt for those connections, but that does not mean you should.
Clearly, these models did not just learn abstract facts about plumbers—for example, that they wear overalls and carry wrenches. They learned facts about a specific fictional Italian plumber who wears white gloves, blue overalls with yellow buttons, and a red hat with an “M” on the front.
These are not facts about the world that lie beyond the reach of copyright. Rather, the creative choices that define Mario are likely covered by copyrights held by Nintendo.
I sort of already explained this without mentioning this specific example, but I'll make it extra clear.
In the article they prompted the AI for a "video game Italian plumber". What person, if you asked them to think of an "Italian video game plumber", would not think of Mario? Maybe Luigi? I'll tell you: hardly anyone, because there are damn few famous Italian video game plumbers. The prompt is already locked in on Mario, and even humans make the logical connection to Mario. The model might have had billions of images and texts to draw on, but any time something related to an "Italian video game plumber" showed up, there was Mario.
So this whole point the article makes about the model not learning abstract facts about plumbers is completely moot, because they biased the outputs towards receiving exactly what they wanted to receive. If you ask for just a plumber, for which it does have many, many examples, it will generalize more and become less specific, because there are far more than two examples of plumbers in other kinds of situations. Humans do the exact same thing with the same task, yet somehow the AI must be immune to it despite being an artificial version of the biological thing. And that is why analysis is protected: humans simply cannot stop doing it, and everyone is tainted by their knowledge of Mario, even when, for whatever reason, we might need to use one of the ideas Mario is built upon. This is why AIs rely on the same defense. I can say this regardless of jurisdiction, because unless you live in some kind of dictatorship this is generally true.
Sadly, this kind of deceptive framing of AI output is common, particularly among those who are biased against AI. Sometimes it's unintentional, but frequently specific parameters are chosen that will generate specific bad results, ignoring that these may not even represent 0.001% of what the model can generate in normal situations.
This is contradictory to how you present it as “taking ideas”.
It is not. You can use the idea of Mario; you cannot use the totality of Mario. For the AI to be able to use the idea of Mario, it will also 'learn' the totality of Mario in the process, since Mario is a collection of ideas that get extracted. But those ideas are stored separately so they can be individually prompted for. You can prompt it to make Mario because, like almost every person in society, it knows what ideas make up Mario better than I can put into words here. If I hired a human artist to make me a "video game Italian plumber", their first question would be "Oh, something like Mario?" and their second response would be "Oh, I can't do that, and you shouldn't want me to, because you don't own Mario." Humans use AI, so they need to be the ones to give that second response.
Just because a kitchen knife can be used to stab someone doesn't mean we produce kitchen knives for stabbing people. Just because an AI can be used to infringe does not mean it is produced to infringe, which is evidenced by the vast majority of ways it can be used that don't infringe, something that is self-evident after tinkering around with it for a little while.
You’re mixing up different things. I’m saying that the image contains infringing material, which is hopefully not something you have to be convinced about. The production of an obviously infringing image, without the infringing elements having been provided in the prompt, is used to show how this information is encoded inside the model in some form. Whether this copyright-protected material exists in some form inside the model is not an equivalent question to whether this is copyright infringement. You are right that the courts have not decided on the latter, but we have been talking about the former. I repeat your position which I was directly responding to before:
If it's anything like the examples before, then the AI has definitely been prompted by the user to make infringing elements.
But anyway, to the question: you just don't seem to grasp that collections of ideas can communicate copyright-infringing material without being infringing on their own. It's like arguing that because Paint or Photoshop knows about the color red, it is infringing copyright since it's the same red Mario uses. None of the ideas that make up Mario are infringing on their own, and they cannot be copyrighted. Those ideas are what the AI is designed to extract, not Mario as a totality.
You can definitely use AI as an infringement machine by making it less likely to take leaps between ideas and having it only recombine the ideas it was trained on, which we humans can do as well in the form of plagiarism and forgery. But if you're going to be unethical, why use an AI when you could just take the easy route with a screenshot or a photo? Those are two other technologies we didn't ban for their ability to capture copyrighted material, even though they copy it far more blatantly.
This is where good AI usage deviates: it instead tries to MAXIMIZE the number of leaps and connections the AI makes, leaving as little chance as possible of making something infringing. Even honest people trying to make new creative works sometimes have to change things because they come too close to infringing.
It's hard not to see exactly what the US under Trump is doing with these kinds of hostile actions lately. They're trying to create a situation where Europe and/or others can only reasonably respond with actions that can be branded as hostile in return (that, or bend the knee), so they can play the victim and weaponize it as propaganda to undermine the peaceful and friendly relationships between ordinary people, a foundation of large-scale peace since WW2. Disgusting and utterly sinister behavior. We must resist being played like that and fight back against that urge while remaining defiant of this circus.