
Visual artists fight back against AI companies for repurposing their work: Three visual artists are suing artificial intelligence image generators to protect their copyrights and careers.

[–] kava 9 points 1 year ago (1 children)

I don't think it's obvious at all, either legally speaking (there is no consensus around this issue) or ethically speaking, because AIs fundamentally function the same way humans do.

We take in input, some of which is bound to be copyrighted work, and we mesh them all together to create new things. This is essentially how art works. Someone's "style" cannot be copyrighted, only specific works.

The government recently announced an inquiry into the copyright questions surrounding AI. They are going to make recommendations to Congress about what legislation, if any, they think would be a good idea. I believe there's a period of public comment until mid-October, if anyone wants to write a comment.

[–] [email protected] 7 points 1 year ago* (last edited 1 year ago) (3 children)

I really hope you're wrong.

And I think there's a difference. Humans can draw stuff, build structures, and make tools in a way that improves upon the previous iteration. Each artist adds something, or combines things in a way that makes for something greater.

AI art literally cannot do anything without human training data. It can't take a previous result, be inspired by it, and make it better. There has to be actual human input; it can't train itself on its own data the way humans do. It absolutely does not "work the same way".

AI art has NEVER made me feel like it's greater than the sum of its parts. Unlike art made by humans, which makes me feel that way all the time.

If a human does art without input, you still get "something".

With an AI, you don't have that. Without the training data, you have nothing.

[–] kava 6 points 1 year ago* (last edited 1 year ago) (1 children)

If a human does art without input, you still get “something”.

OK, take a human being who has never had any interaction with another human and has never consumed any content created by humans. Give him finger paint and have him paint something on a blank canvas. I think it wouldn't look any different from a chimpanzee's finger painting.

it can’t train itself on its own data

In theory, it could. You would just need a way to quantify the "fitness" of a drawing. Today this is done by comparing against actual content, but you don't need actual content in some circumstances. For example, look at AlphaZero, DeepMind's chess AI from a few years back. All the AI knew was the rules of the game. It did not have access to any database of games. No data. It learned by playing millions of games against itself.

It trained itself on its own data. And that AI, at the time, beat the leading chess engine, which had access to opening databases and other pre-built algorithms.
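If it helps make that concrete, here's a minimal sketch of the self-play idea in Python. It's a toy, not AlphaZero: a tabular agent learns Nim (take 1-3 sticks from a pile of 21; whoever takes the last stick wins) given nothing but the rules. The game, learning rule, and hyperparameters are all illustrative stand-ins; the real system combines tree search with a deep network.

```python
# Toy self-play learner: no game database, no human data, only the rules.
import random
from collections import defaultdict

value = defaultdict(float)   # learned value of making `move` with `sticks` left
ALPHA, EPSILON = 0.1, 0.2    # learning rate, exploration rate (arbitrary choices)

def choose_move(sticks):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < EPSILON:
        return random.choice(moves)                       # explore
    return max(moves, key=lambda m: value[(sticks, m)])   # exploit

def play_one_game():
    sticks, player, history = 21, 0, []
    while sticks > 0:
        move = choose_move(sticks)
        history.append((sticks, move, player))
        sticks -= move
        player ^= 1                                       # switch players
    winner = history[-1][2]                               # took the last stick
    for state, move, mover in history:                    # reinforce toward outcome
        reward = 1.0 if mover == winner else -1.0
        value[(state, move)] += ALPHA * (reward - value[(state, move)])

for _ in range(50_000):      # all of the training "data" is pure self-play
    play_one_game()

# The greedy policy typically converges on the known winning opening (take 1).
print(max((1, 2, 3), key=lambda m: value[(21, m)]))
```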

With art this gets trickier, because art is subjective. You can clearly quantify whether you won or lost a chess game. How do you quantify whether something is a good piece of art? If we could somehow quantify that, you could in theory create an AI that generates art with no input.

We're in the infancy stages of this technology.

Humans can draw stuff, build structures, and make tools in a way that improves upon the previous iteration. Each artist adds something, or combines things in a way that makes for something greater.

AI can do all of the same. I know it's scary, but it's here and it isn't going away. AI-designed systems are becoming more and more commonplace: solar panels, medical devices, computer hardware, aircraft wings, potential drug compounds, etc. AI can be really good at certain things, and designing something and then testing it in a million different simulations is one thing AI can do a lot better than humans.

AI art has NEVER made me feel like it’s greater than the sum of its parts

What is art? If I make something that means nothing and you find a meaning in it, is it meaningful? AI is a cold, calculated mathematical model that produces meaningless output. But humans love finding patterns in noise.

Trust me, you will eventually see some sort of AI art that makes an impact on you. Math doesn't lie. If statistics can turn art into data and find the hidden patterns that make something impactful, then it can recreate it in a way that is impactful.

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

The randomness that current machine learning uses to train neural networks will never be able to do what a human does when they are being creative.

I have no doubt AI art will be able to "say" things. But it won't be saying things that haven't already been said.

And yes, AI can brute force its way to solutions in ways humans cannot beat. But that only works when there is a solution. So AI works for science, engineering, and chess.

Art does not have a "solution". Every answer is valid. Humans are able to create good art because they understand the question: "What is it to be human?" "Why are we here?" "What is adulthood?" "Why do I feel this?" "What is innocence?"

AI does not understand anything. All it is doing is mimicking art already created by humans, and coincidentally sometimes getting it right.

[–] kava 5 points 1 year ago (3 children)

AI can brute force its way to solutions in ways humans cannot beat

It's not brute force. It seems like brute force because trying something millions of times sounds impossible to us. But they identify patterns and then use those patterns to create output. It's learning; it's why we call it "machine learning". The mechanics are different from how humans do it, but fundamentally it's the same.

The only reason you know what a tree looks like is because you've seen a million different trees: trees in person, trees in movies, trees in cartoons, trees in drawings, etc. Your brain has taken all of these different trees and merged them together to create an "ideal" tree, sort of like Plato's "world of forms".

AI can recognize a tree through the same process. It views millions of trees and creates an "ideal" tree. It can then compare any image it sees against this ideal and determine the probability that it is or isn't a tree. Combine this with something that randomly pumps out images, and you can compare those generated images against the internal model of a tree; all of a sudden you have an AI that can create novel images of trees.

It's fundamentally the same thing we do. It's creating pictures of trees that didn't exist before. The only difference is it happens in a statistical model and it happens at a larger and faster scale than humans are capable of.
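That generate-and-compare loop is, roughly, what a GAN (generative adversarial network) does: one network pumps out images while another scores how "tree-like" they are, and both improve together. Here's a hedged sketch in PyTorch; the sizes and hyperparameters are invented, and random tensors stand in for a real dataset of tree photos.

```python
import torch
import torch.nn as nn

IMG, NOISE = 64, 16  # flattened image size and latent noise size (illustrative)

generator = nn.Sequential(nn.Linear(NOISE, 128), nn.ReLU(),
                          nn.Linear(128, IMG), nn.Tanh())
critic = nn.Sequential(nn.Linear(IMG, 128), nn.ReLU(),
                       nn.Linear(128, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
c_opt = torch.optim.Adam(critic.parameters(), lr=2e-4)
bce = nn.BCELoss()

real_trees = torch.rand(256, IMG) * 2 - 1   # stand-in for real tree images

for step in range(1_000):
    real = real_trees[torch.randint(0, 256, (32,))]
    fake = generator(torch.randn(32, NOISE))

    # Critic update: sharpen its internal "ideal" separating real from fake.
    c_opt.zero_grad()
    c_loss = (bce(critic(real), torch.ones(32, 1)) +
              bce(critic(fake.detach()), torch.zeros(32, 1)))
    c_loss.backward()
    c_opt.step()

    # Generator update: nudge its output toward what the critic calls a tree.
    g_opt.zero_grad()
    g_loss = bce(critic(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

The point is the feedback structure: the critic distills its training images into an internal "ideal", and the generator is pushed toward that ideal rather than copying any single image.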

This is why the question of AI models having to pay copyright for content it parses is not obvious at all.

Art does not have a “solution”. Every answer is valid.

If every answer is valid, then you would be sitting here saying that AI art is just as valid as anything else.

[–] Grimy 2 points 1 year ago* (last edited 1 year ago)

I think it's a mistake to see the software as an independent entity. It's a tool, just like the paintbrush or Photoshop. So yes, there isn't any AI art without the human, but that's true for every single art form.

The best art is a mix of different techniques and skills. Many digital artists are incorporating AI into their workflows, and there is definitely depth to what they are making.

[–] FooBarrington 0 points 1 year ago (1 children)

It can't take a previous result, be inspired by it, and make it better.

Why do you think so? AI art can take an image and change it in creative ways, just as humans can.

There has to be actual human input; it can't train itself on its own data the way humans do.

Only an incredibly small number of humans have ever "trained themselves" without relying on previous human data. Anyone who has ever seen any piece of artwork wouldn't qualify.

AI art has NEVER made me feel like it's greater than the sum of its parts.

Art is subjective. I've seen great and interesting AI art, and I've seen boring and uninspired human art.

If a human does art without input, you still get "something".

Really? Do you have an example of someone who is deaf, blind, mute, and unable to feel touch who became an artist? Because all of those are inputs all humans have had since birth.

[–] [email protected] 0 points 1 year ago (22 children)

I'm talking from a perspective of understanding how machine learning networks work.

They cannot make something new. By nature, they can only mimic.

The randomness they use to combine different pieces of work is not creativity. It's brute force. It's doing the math a million times until it looks right.

Humans fundamentally do not work that way. When an engineer sees a design and thinks "I can improve that", they do so because they understand the mechanism.

Modern AIs do not understand anything. They brute force their way to valid output, and in some cases, like with code, science, or an engineering problem, there might be one single best solution, which an AI can find faster than a human.

But art DOES NOT HAVE a single correct "solution".

[–] lunarul 3 points 1 year ago (1 children)

AI is supposed to work with human input. AI is a tool for the artist, not a replacement for the artist. The human artist is the one calling the shots, deciding when the final result is good or when it needs improvement.

[–] [email protected] 1 points 1 year ago (1 children)

Absolutely.

Yet a lot of people are sharpening their knives in preparation to cut the artist out of the process.

[–] lunarul 1 points 1 year ago

And the difference in results is clear. There were people who replaced artists with Photoshop, there are people who replace artists with AI, and each new tool will further empower people to try things on their own. If those results are good enough for them, then they probably wouldn't have paid for a good artist anyway.

[–] FooBarrington 1 points 1 year ago (1 children)

They cannot make something new. By nature, they can only mimic.

Explain it to me from a mathematical point of view. How can I know based on the structure of GANs or Transformers that they, by nature, can only mimic? Please explain it mathematically, since you're referring to their nature.

The randomness they use to combine different pieces of work is not creativity. It's brute force. It's doing the math a million times until it looks right.

This betrays a lack of understanding on your part. What is the difference between creativity and brute force? The rate of acceptable navigations in the latent space. Transformers and GANs do not brute-force in any capacity. Where do you get the idea that they generate millions of variations until they get it right?

Humans fundamentally do not work that way. When an engineer sees a design and thinks "I can improve that", they do so because they understand the mechanism.

Define understanding for me. AI can, for example, automatically optimise algorithms (it's a fascinating field, finding a more efficient implementation without changing results). This should be impossible if you're correct. Why does it work? Why can they optimise without understanding, and why can't this be used in other areas?

Modern AIs do not understand anything. They brute force their way to valid output, and in some cases, like with code, science, or an engineering problem, there might be one single best solution, which an AI can find faster than a human.

Again, define understanding. They provably build internal models of whatever task you train them on. How is that not a form of understanding?

But art DOES NOT HAVE a single correct "solution".

Then it seems great that an AI doesn't always give the same result for the same input, no?

[–] [email protected] 0 points 1 year ago (1 children)

The brute forcing doesn't happen when you generate the art. It happens when you train the model.

You fiddle with the numbers until it produces only results that "look right". That doesn't make it not brute forcing.

Human inspiration and creativity, meanwhile, are an intuitive process. And we understand why 2 + 2 is 4.

Writing a piece of code that takes two values and sums them, does not mean the code comprehends math.

In the same way, training a model to generate sound or visuals, does not mean it understands the human experience.

As for current models generating different results for the same prompt... no. They don't. They generate variations, but the same prompt won't get you Dalí in one iteration, then Monet in the next.

[–] FooBarrington 1 points 1 year ago* (last edited 1 year ago) (1 children)

The brute forcing doesn't happen when you generate the art. It happens when you train the model.

So it's the same as a human - they also generate art until they get something that "looks right" during training. How is it different when an AI does it?

But you'll have to explain where this brute forcing happens. What are the inputs and outputs of the process? Because the NN doesn't generate all possible outputs until the correct one is found, which is what brute forcing is. Maybe you could argue that GANs are kinda doing this, but it's still a very much directed process, which is entirely different from real brute forcing.
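To make the distinction concrete, here's a toy one-parameter example (nothing like a real network, purely an illustration): true brute force enumerates candidates blindly, while gradient descent, which is what training actually does, takes steps directed by the error.

```python
# Toy "model": find the weight that minimizes the loss.
def loss(w):
    return (w - 3.7) ** 2          # invented objective; the best weight is 3.7

# True brute force: enumerate thousands of candidates and keep the best.
best = min((i / 1000 for i in range(10_000)), key=loss)

# Gradient descent: every step is directed by the slope of the loss,
# so no blind enumeration of candidates ever happens.
w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2 * (w - 3.7)           # derivative of the loss at the current w
    w -= lr * grad

print(best, w)                      # both land near 3.7; only one searched blindly
```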

Human inspiration and creativity, meanwhile, are an intuitive process. And we understand why 2 + 2 is 4.

You're using more words without defining them.

Writing a piece of code that takes two values and sums them, does not mean the code comprehends math.

But we're not writing code to generate art. We're writing code to train a model to generate art. As I've already mentioned, NNs provably can build an accurate model of whatever you're training them on. How is this not a form of comprehension?

In the same way, training a model to generate sound or visuals, does not mean it understands the human experience.

Please prove you need to understand the human experience to be able to generate meaningful art.

As for current models generating different results for the same prompt... no. They don't. They generate variations, but the same prompt won't get you Dalí in one iteration, then Monet in the next.

Of course they can, depending on your prompt and temperature.
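For anyone unfamiliar, "temperature" controls how sharply the model's scores are turned into a random choice. A small self-contained sketch; the "styles" and their scores are invented for illustration (real models sample over tokens or latents, but the temperature math is the same):

```python
import math
import random

logits = {"Dalí-esque": 2.0, "Monet-esque": 1.5, "photoreal": 1.0}  # made-up scores

def sample(logits, temperature):
    # Divide scores by T before exponentiating: low T sharpens, high T flattens.
    weights = {k: math.exp(v / temperature) for k, v in logits.items()}
    r = random.random() * sum(weights.values())
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k
    return k  # guard against float rounding

print([sample(logits, 0.2) for _ in range(5)])  # low T: nearly deterministic
print([sample(logits, 5.0) for _ in range(5)])  # high T: wide variation
```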

[–] [email protected] 0 points 1 year ago (1 children)

You are drawing parallels where I don't think there are any, and are asking me to prove things I consider self-evident.

I'm no longer interested in elaborating, and I don't think you'd understand me if I did.

[–] FooBarrington 1 points 1 year ago (1 children)

This is what it always comes down to - you have this fuzzy feeling that AI art is not real art, but the deeper you dig, the harder it gets to draw a real distinction. This is because your arguments aren't rooted in actual definitions, so instead of clearly explaining the difference between A and B, you handwave it away due to C, which you also don't explain.

I once held positions similar to yours, but after analysing the topic much more deeply I arrived at my current positions. I can clearly answer all the questions I posed to you. You should consider whether your not being able to means anything about your own position.

[–] [email protected] 0 points 1 year ago (1 children)

I am able to answer your questions for myself. I have lost interest in doing so for you.

[–] FooBarrington 2 points 1 year ago (1 children)

But can you do so from the ground up, without handwaving towards the next unexplained reason? That's what you've done here so far.

[–] [email protected] 1 points 1 year ago (1 children)

Yes.

I once held a view similar to the one you present now. I would consider my current opinion further advanced, like you do yours.

You ask for elaboration and verbal definitions; I've been concise because I do not wish to spend time on this.

It is clear we cannot proceed further without me doing so. I have decided I won't.

[–] FooBarrington 2 points 1 year ago (1 children)

Bummer. You could have been the first to bring actual argument for your position :)

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Not today. I have too much else to do.

And it's not like my being concise makes my argument absent.

[–] FooBarrington 2 points 1 year ago (1 children)

The issue isn't that you're being concise; it's that you're throwing around words that don't have a clear definition and expecting your definition to be broadly shared. You keep referring to understanding, and yet objective evidence of understanding is only met with "but it's not creative".

[–] [email protected] 1 points 1 year ago (1 children)

Are you suggesting there is valid evidence modern ML models are capable of understanding?

I don't see how that could be true for any definition of the word.

[–] FooBarrington 2 points 1 year ago* (last edited 1 year ago) (1 children)

As I've shared 3 times already: Yes, there is valid evidence that modern ML models are capable of understanding. Why do I have to repeat it a fourth time?

I don’t see how that could be true for any definition of the word.

Then explain to me how it isn't true given the evidence:

Language models show a surprising range of capabilities, but the source of their apparent competence is unclear. Do these networks just memorize a collection of surface statistics, or do they rely on internal representations of the process that generates the sequences they see? We investigate this question by applying a variant of the GPT model to the task of predicting legal moves in a simple board game, Othello. Although the network has no a priori knowledge of the game or its rules, we uncover evidence of an emergent nonlinear internal representation of the board state. Interventional experiments indicate this representation can be used to control the output of the network and create "latent saliency maps" that can help explain predictions in human terms.

https://arxiv.org/abs/2210.13382

I don't see how an emergent nonlinear internal representation of the board state is anything besides "understanding" it.
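For anyone curious what "probing" an internal representation means in practice, here's a hedged, fully synthetic sketch of the methodology: freeze the activations, fit a simple readout, and check whether the state can be decoded far above chance. Everything below is rigged so a linear readout exists, purely to show the mechanics; the paper probes a real trained Othello model and uses nonlinear probes.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 512))                 # stand-in model activations
board = hidden @ rng.normal(size=(512, 64)) > 0.0     # 64 fake "board squares"

# Linear probe: one least-squares readout per square, fit on the activations.
W, *_ = np.linalg.lstsq(hidden, board.astype(float) * 2 - 1, rcond=None)
pred = (hidden @ W) > 0.0
print("probe accuracy:", (pred == board).mean())      # far above the 0.5 baseline
```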

[–] [email protected] 1 points 1 year ago* (last edited 1 year ago) (1 children)

Cool. But this is still stuff that has a "right" answer. Math. Math in the form of game rules, but still math.

I have seen no evidence that MLs can comprehend the abstract; that they can know, or more accurately model, the human experience. It's not even clear that, given a conscious entity, it is possible to communicate about being human to something non-human.

I am amazed, but not surprised, that you can explain a "system" to an LLM. However, doing the same for a concept, or human emotion, is not something I think is possible.

[–] FooBarrington 2 points 1 year ago (1 children)

Cool. But this is still stuff that has a “right” answer.

What are you talking about? You wanted evidence that NNs can understand stuff, I showed you evidence.

Math. Math in the form of game rules, but still math.

Yes, and math can represent whatever you want. It can represent language, it can represent physics, it can even represent a human brain. Don't assume we are more than incredibly complicated machines. If you want to argue "it's just math", then show me that anything isn't just math.

I have seen no evidence that MLs can comprehend the abstract; that they can know, or more accurately model, the human experience. It's not even clear that, given a conscious entity, it is possible to communicate about being human to something non-human.

See? And that's the handwaving. You're talking about "the human experience" as if that's a thing with an actual definition. Why is "the human experience" relevant to whether NNs can understand things?

I am amazed, but not surprised, that you can explain a “system” to an LLM. However, doing the same for a concept, is not something I think is possible.

And the next handwave - what is a concept? How is "the board in Othello" not a concept?

[–] [email protected] 1 points 1 year ago (1 children)

Modern MLs are nowhere near complex enough to model reality to the extent required for genuine artistic expression.

That you need me to say this in an essay instead of a sentence is your problem, not mine.

[–] FooBarrington 2 points 1 year ago (1 children)

Modern MLs are nowhere near complex enough to model reality to the extent required for genuine artistic expression.

You'd have to bring up actual evidence for this. Easiest would be to start by defining "genuine artistic expression". But I have a feeling you'll just resort to the next handwave...

Thank you for confirming that your position doesn't make any sense.

[–] [email protected] 1 points 1 year ago

Thank you for confirming that your position doesn't make any sense.

Rude. Thanks for confirming my choice to minimize the effort I spend on you, I guess.
