this post was submitted on 08 Aug 2024
219 points (83.7% liked)

Unpopular Opinion

6341 readers
31 users here now

Welcome to the Unpopular Opinion community!


How voting works:

Vote the opposite of the norm.


If you agree that the opinion is unpopular, give it an arrow up. If it's something that's widely accepted, give it an arrow down.



Guidelines:

Tag your post, if possible (not required)


  • If your post is a "General" unpopular opinion, start the subject with [GENERAL].
  • If it is a Lemmy-specific unpopular opinion, start it with [LEMMY].


Rules:

1. NO POLITICS


Politics is everywhere. Let's keep this to [general]- and [lemmy]-specific topics, and keep politics out of it.


2. Be civil.


Disagreements happen, but that doesn’t provide the right to personally attack others. No racism/sexism/bigotry. Please also refrain from gatekeeping others' opinions.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Shitposts and memes are allowed but...


Only until they prove to be a problem. They can and will be removed at moderator discretion.


5. No trolling.


This shouldn't need an explanation. If your post or comment is made just to get a rise out of people, with no real value, it will be removed. If you do this too often, you will get a vacation away from this community to touch grass for one or more days. Repeat offenses will result in a perma-ban.



Instance-wide rules always apply. https://legal.lemmy.world/tos/

founded 1 year ago

I've recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people's works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate "new" content based on probabilities.
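To put rough numbers on the size claim, here's a back-of-envelope sketch. The parameter count and crawl size are commonly cited public estimates for a GPT-3-class model; treat them as assumptions, not exact figures:

```python
# Back-of-envelope check of the claim that training data is much
# larger than the model itself. Figures are rough public estimates
# for a GPT-3-class model (assumptions, not exact numbers).
params = 175e9                    # ~175 billion parameters
model_bytes = params * 2          # 2 bytes per fp16 parameter
training_bytes = 45e12            # ~45 TB of raw crawled text

ratio = training_bytes / model_bytes
print(f"training data is roughly {ratio:.0f}x the model size")
```

Even with generous assumptions about the model's size on disk, the raw training text dwarfs it by a couple of orders of magnitude, so there simply isn't room to store the corpus verbatim.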

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on data to reproduce someone's "likeness." I understand the hate for AI-generated shit (because it is shit). I really don't understand where all this hate for using public data to build a "statistical" model that "learns" general patterns is coming from.

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don't think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background-removers, better autocomplete, etc), which might eliminate some jobs, but that's really just a problem with capitalism, and productivity increases are generally considered good.

[–] jordanlund 58 points 3 months ago (1 children)

Generally the argument isn't public vs. private, it's public domain vs. copyright.

You want to train an LLM using the contents of Project Gutenberg? Great, go for it!

You want to train an LLM using bootlegged epubs stolen from Amazon? Now that's a different deal.

[–] [email protected] 5 points 3 months ago

Sure, but they'd at least need to borrow the epubs, just like a human would if they wanted to read them.

[–] [email protected] 56 points 3 months ago* (last edited 3 months ago) (2 children)

This falls squarely into the trap of treating corporations as people.

People have a right to public data.

Corporations should continue to be tolerated only while they carefully walk an ever tightening fine line of acceptable behavior.

[–] Keineanung 8 points 3 months ago

Never thought about it like that, that's a really good way of looking at it.

[–] [email protected] 6 points 3 months ago (1 children)

Sure but restricting open source efforts is restricting people.

[–] [email protected] 6 points 3 months ago* (last edited 3 months ago) (2 children)

Yes. Large groups of people acting in concert, with large amounts of funding and influence, must be held to the highest standards, regardless of whether they're doing something I personally value highly.

An individual's rights must be held sacred.

When those two goals are in conflict, we must melt the corporation-in-conflict down for scrap parts, donate all of its intellectual property to the public domain, and try again with forming a new organization with a similar but refined charter.

Shareholders should be, ideally, absolutely fucked by this arrangement, when their corporation fucks up, as an incentive to watch and maintain legal compliance in any companies they hold shares in and influence over.

Investment will still happen, but with more care. We have historically used this model to great innovative success, public good, and lucrative dividends. Some people have forgotten how it can work.

[–] [email protected] 40 points 3 months ago (1 children)

Define "public".

Publicly available is not the same as public domain. You should respect the copyright, especially of small creators. I'm of the opinion that an ML model is a derivative work, and so if you've trawled every website under the sun for data to feed your model you've violated copyright.

[–] VoterFrog 1 points 3 months ago (1 children)

There are multiple facets here that all kinda get mashed together when people discuss this topic and the publicly available/public domain difference kinda gets at that.

  • An AI company downloading a publicly available work isn't a violation of copyright law. Copyright gives the owner exclusive right to distribute their work. Publishing it for anybody to download is them exercising that right.
  • Of course, if the work isn't publicly available and the AI company got it, someone probably did violate copyright laws, likely the people who distributed the data set to the company because they're not supposed to be passing around the work without the owner's permission.
  • All that is to say, downloading something isn't making a copy. Sending the work is making a copy, as far as copyright is concerned. Whether the person downloading it is going to use it for something profitable doesn't really change anything there. Only if they were to become the sender at some later point does it matter. In other words, there's no violation of copyright law by the company that can really occur during the whole "training" phase of AI development.
  • Beyond that, AI isn't in the business of serving copies of works. Models might come close in some specific instances, but that's largely a technical problem that developers want to fix, rather than a fundamental purpose of these models.
  • The only real case that might work against them is whether or not the works they produce are derivative... but derivative/transformative has a pretty strict legal definition. It's not enough to show that a work was used in the creation of a new work. You can, for example, create a word cloud of your favorite book, analyze the tone of a news article to help you trade stocks, or produce an image containing the most prominent color in every frame of a movie. None of these could exist without deriving from a copyrighted work, but none of them count as a legally derivative work.
  • I chose those examples because they are basic statistical analyses, not far from what AI training involves. There are a lot of aspects of a work that are not covered by copyright: style, structure, factual information. Those are the kinds of things that AI is mostly interested in replicating.
  • So I don't think we're going to see a lot of success in taking down AI companies with copyright. We might see some small scale success when an AI crosses a line here or there. But unless a judge radically alters the bounds of copyright law, at everyone's detriment, their opponents are going to have an uphill battle to fight here.
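The word-cloud example above is easy to make concrete. Here's a sketch of that kind of statistical summary; the sample text is the opening of A Tale of Two Cities, which is public domain:

```python
from collections import Counter
import re

def word_cloud_counts(text, top=5):
    # Count word frequencies: a statistical summary that derives
    # from a text without reproducing any passage of it.
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top)

sample = ("it was the best of times, it was the worst of times, "
          "it was the age of wisdom, it was the age of foolishness")
print(word_cloud_counts(sample))
```

The counts plainly "derive from" the book, yet no arrangement of them reconstructs a single sentence, which is the distinction the bullet list is drawing.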
[–] CptBread 4 points 3 months ago (2 children)

An AI model could be seen as an efficient but lossy compression scheme, especially when it comes to images... And a compressed jpeg of an image is still seen as a copy so why would an AI model trained on reproducing it be different?

[–] BluesF 3 points 3 months ago

Are you suggesting that the model itself is a compressed version of its training data? I think it requires some stretches of how training works to accept that.

[–] [email protected] 2 points 3 months ago

It depends on how much you compress the jpeg. If it gets compressed down to 4 pixels, it cannot be seen as infringement. Technically, the word cloud is lossy compression too: it has all of the information of the text, but none of the structure. I think it depends largely on how well you can reconstruct the original from the data. A word cloud, for instance, cannot be used to reconstruct the original. Nor can a compressed jpeg, ofc; that’s the definition of lossy. But most of the information is still there, so a casual observer can quickly glean the gist of the image. There is a line somewhere between finding the average color of a work (compression down to one pixel) and jpeg compression levels.
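The "compression down to one pixel" case can be sketched directly. Nothing about the original image survives except one average value (the tiny 2x2 "image" here is made up for illustration):

```python
def average_color(pixels):
    # Collapse an image to a single average RGB value: an extreme
    # form of lossy "compression" that cannot reconstruct the original.
    n = len(pixels)
    return tuple(sum(p[c] for p in pixels) // n for c in range(3))

# A tiny synthetic 2x2 "image" as a flat list of RGB tuples.
img = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
print(average_color(img))
```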

Is the line where the main idea of the work becomes obscured? Surely not, since a summary hardly infringes on the copyright of a book. I don't know where this line should be drawn (personally, I feel very Stallman-esque about copyright: IP is not a coherent concept), but if we want to put rules on these things, we need to define them well, which requires venturing into the domain of information theory (what percentage of the entropy in the original is part of the redistributed work, for example). I don't know how realistic that is in the context of law, though.
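For the information-theory angle, the empirical Shannon entropy of a byte string is straightforward to compute. This is a toy measure of "information content", not a legal test:

```python
import math
from collections import Counter

def shannon_entropy_bits(data: bytes) -> float:
    # Empirical Shannon entropy (bits per byte) of a byte string:
    # a crude way to quantify how much information a derived
    # artifact retains, as gestured at above.
    counts = Counter(data)
    n = len(data)
    return sum((c / n) * math.log2(n / c) for c in counts.values())

print(shannon_entropy_bits(b"aaaa"))  # 0.0: no uncertainty at all
print(shannon_entropy_bits(b"abcd"))  # 2.0: four equally likely symbols
```

Comparing the entropy of a derived work against the original is one imaginable way to quantify "what percentage of the original" was retained, though turning that into a workable legal standard is another matter entirely.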

[–] NateNate60 30 points 3 months ago (2 children)

This is not an opinion. You have made a statement of fact. And you are wrong.

At law, something being publicly available does not mean it is allowed to be used for any purpose. Copyright law still applies. In most countries, making something publicly available does not cause all copyrights to be disclaimed on it. You are still not permitted to, for example, repost it elsewhere without the copyright holder's permission, or, as some courts have ruled, use it to train an AI that then creates derivative works. Derivative works are not permitted without the copyright holder's permission. Courts have ruled that this could mean everything an AI generates is a derivative work of everything in its training data and, therefore, copyright infringement.

[–] [email protected] 24 points 3 months ago (3 children)

Saying that statistical analysis is derivative work is a massive stretch. Generative AI is just a way of representing statistical data. It's not a particularly informative or useful representation of any one source work (it may be perturbed with random noise to create something "new", for example), and calling it a derivative work in the same way that fan-fiction is derivative is disingenuous at best.

[–] the_toast_is_gone 6 points 3 months ago

Wouldn't that argument be like saying an image I drew of a copyrighted character is just an arrangement of pixels based on existing data? The fact remains that, if I tell an AI to generate an image of a copyrighted character, then it'll produce something without the permission of the original artist.

I suppose then the problem becomes: who do you hold responsible for the copyright violation (if you pursue that course of action)? Do you go after the guy who told the AI to do it, or the people who trained the AI and published it? Possibly both? On one hand, suing the AI company would be like suing Adobe because they don't stop people from drawing copyrighted materials in their software (yet). On the other hand, they did create software that basically acts in the place of an artist who draws whatever you want on commission. If that artist was drawing copyrighted characters for money, you could make the case that the AI company is doing the same: manufacturing copyrighted character images by feeding the AI images of the character and letting people generate images of it while collecting money for their services.

All this to say, copyright is stupid.

[–] [email protected] 17 points 3 months ago (1 children)

They have indeed made a statement of fact. But to the best of my knowledge it's not one that's got any definite controlling precedent in law.

You are still not permitted to, for example, repost it elsewhere without the copyright holder's permission

That's the thing. It's not clear that an LLM does "repost it elsewhere". As the OP said, the model itself is basically just a mathematical construct that can't really be turned back into the original work, which is possibly a sign that it's not a derivative work, but a transformative one, which is much more likely to be given Fair Use protection. Though Fair Use is always a question mark and you never really know if a use is Fair without going to court.

You could be right here. Or OP could. As far as I'm concerned anyone claiming to know either way is talking out of their arse.

[–] [email protected] 22 points 3 months ago (1 children)

I don’t have a problem with tech companies doing statistics on publicly available data, I have a problem with them getting rich by charging money for the collective creative works of humanity. But if they want to share their work for free, I have no issue with that.

[–] [email protected] 6 points 3 months ago

Yeah, because corporations never make money off things they make available free of charge. There's no way this could go wrong.

[–] [email protected] 19 points 3 months ago (1 children)

For personal or public use, I'm fine with it. If you use it to make money, that's when I get upsetti spaghetti.

[–] stoicmaverick 12 points 3 months ago (2 children)

Ok. Devil's advocate: how is a software engineer profiting from his AI model different from an artist who learns to draw by mimicking the style of public works? Asking for a friend.

[–] [email protected] 5 points 3 months ago

Good question!

First, that artist will only learn from a handful of artists, instead of from every artist's entire body of work all at the same time. They will also eventually develop their own unique style and voice; the art they make will reflect their own views in some fashion, instead of being a poor facsimile of someone else's work.

Second, mimicking the style of other artists is a generally poor way of learning how to draw. Just leaping straight into mimicry doesn't really teach you any of the fundamentals like perspective, color theory, shading, anatomy, etc. Mimicking an artist that draws lots of side profiles of animals in neutral lighting might teach you how to draw a side profile of a rabbit, but you'll be fucked the instant you try to draw that same rabbit from the front, or if you want to draw a rabbit at sunset. There's a reason why artists do so many drawings of random shit like cones casting a shadow, or a mannequin doll doing a ballet pose, and it ain't because they find the subject interesting.

Third, an artist spends anywhere from dozens to hundreds of hours practicing. Even if someone sets out expressly to mimic someone else's style and teaches themselves the fundamentals, it's still months and years of hard work, practice, and a constant cycle of self-improvement, critique, and study. This applies to every artist, regardless of how naturally talented or gifted they are.

Fourth, there's a sort of natural bottleneck in how much art that artist can produce. The quality of a given piece of art scales roughly linearly with the time the artist spends on it, and even artists that specialize in speed painting can only produce maybe a dozen pieces of art a day, and that kind of pace is simply not sustainable for any length of time. So even in the least charitable scenario, where a hypothetical person explicitly sets out to mimic a popular artist's style in order to leech off their success, it's extremely difficult for the mimic to produce enough output to truly threaten their victim's livelihood. In comparison, an AI can churn out dozens or hundreds of images in a day, easily drowning out the artist's output.

And one last, very important point: artists who trace other people's artwork and upload the traced art as their own are almost universally reviled in the art community. Getting caught tracing art is an almost guaranteed way to get yourself blacklisted from every art community and banned from every major art website I know of, especially if you're claiming it's your own original work. The only way it's even mildly acceptable is if the tracer explicitly says "this is traced artwork for practice, here's a link to the original piece, the artist gave full permission for me to post this." Every other creative community, writing and music included, takes a similarly dim view of plagiarism, though it's much harder to prove outright than with art. Given all that, why should the art community treat someone differently just because they laundered their plagiarism with some vector multiplication?

[–] [email protected] 4 points 3 months ago

Good question.

Ok, so let's say the artist does exactly what the AI does, in that they don't try to do anything unique: just looking around at existing content and trying to mix and mash existing ideas. No developing their own style, no curiosity about art history, no humanity, nothing. In this case I would say that they are mechanically doing the exact same thing as an AI. Do I think they should get paid? Yes! They spent a good chunk of their life developing this skill; they are a human; they deserve to get their basic needs met and not die of hunger or exposure. Now, this is a strange case, because 99.99% of artists don't do this. Most develop a unique style and add life experience to their art to generate something new.

A software engineer can profit off their AI model by selling it. If they make money by generating images, then they are making money off of hard-working artists who should be paid for their work. That isn't great. The outcome of allowing this is that art will no longer be something you can do to make a living. This is bad for society.

It should also be noted that a software engineer making an AI model from scratch accounts for maybe 0.01% of the AI being used. Most people, lay people who have spent very little time developing art or software-engineering skills, can easily use an existing model to create "art". The result is that many talented artists who could bring new and interesting ideas to the world are being outcompeted by one guy with a web browser producing sub-par, sloppy work.

[–] Treczoks 18 points 3 months ago (1 children)

It would be nice if the AI industry had one big positive effect by finally reining in overreaching copyright laws.

[–] admiralpatrick 9 points 3 months ago (1 children)

If that were to happen, it'd only be for tech companies, not people. lol.

[–] Treczoks 1 points 3 months ago

That might actually happen, yes.

[–] CluckN 7 points 3 months ago

“They should pay their sources!”

Source is 600 GB of raw copied website data mixed in a giant witches' cauldron

[–] [email protected] 6 points 3 months ago

if they're using creative commons licenses (or other sharing licenses) then it's fine! but the model is then also bound by the same licenses, because that's how licenses work

[–] WraithGear 5 points 3 months ago (1 children)

I can agree, but any output must be instantly public domain.

[–] [email protected] 4 points 3 months ago* (last edited 3 months ago) (2 children)

Here’s an analogy that can be used to test this idea.

Let’s say I want to write a book but I totally suck as an author and I have no idea how to write a good one. To get some guidelines and inspiration, I go to the library and read a bunch of books. Then, I’ll take those ideas and smash them together to produce a mediocre book that anyone would refuse to publish. Anyway, I could also buy those books, but the end result would still be the same, except that it would cost me a lot more. Either way, this sort of learning and writing procedure is entirely legal, and people have been doing this for ages. Even if my book looks and feels a lot like LOTR, it probably won’t be that easy to sue me unless I copy large parts of it word for word. Blatant plagiarism might result in a lawsuit, but I guess this isn’t what the AI training data debate is all about, now is it?

However, if I pirated those books, that could result in some trouble. But someone would need to read my miserable book, find a suspicious passage, check my personal bookshelf and everything I have ever borrowed, etc. That way, it might be possible to prove that I could not have come up with a specific line of text except by pirating some book. If an AI is trained on pirated data, that's obviously something worth debating.

[–] [email protected] 7 points 3 months ago (2 children)

You are equating training an LLM with a person learning, but an LLM is not a person. It is not given the same rights and privileges under the law. At best it is a computer program, and you can certainly infringe copyright by writing a program.

[–] Specal 5 points 3 months ago

It's not "at best" a computer program. It is a computer program: a program of probabilities that its response should be X. The training data could be stolen, but its output isn't.
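Here's a toy sketch of that "program of probability" framing. The probability table below is invented for illustration; a real model computes these numbers with a neural network over a huge vocabulary:

```python
import random

# A hand-made table of next-token probabilities. Real language
# models compute distributions like this with a neural network;
# this tiny table is purely illustrative.
NEXT_TOKEN_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "end": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
}

def sample_next(token, rng):
    # Sample the next token from the conditional distribution.
    choices, weights = zip(*NEXT_TOKEN_PROBS[token].items())
    return rng.choices(choices, weights=weights, k=1)[0]

rng = random.Random(0)
print(sample_next("the", rng))
```

The output is always one of the tokens the distribution assigns probability to, which is the sense in which the program's response "should be X" without it storing or replaying any source text.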

[–] [email protected] 2 points 3 months ago (1 children)

An LLM is not a legal entity, nor should it be. However, similar things happen in a human brain and in the network of an LLM, so the same laws could be applicable to some extent. Where do we draw the line? That's a legal/political issue we haven't figured out yet, but following these developments is going to be interesting.

[–] [email protected] 3 points 3 months ago (4 children)

Agreed it hasn't been settled legally yet.

I also agree that an LLM isn't, and shouldn't be, a legal entity. Therefore an LLM is something that can be owned and sold, and a profit made from it.

It is my opinion that the original author of the works should receive compensation when their work is used to make profit i.e. to make the LLM. I'd also say that the original intent of copyright law was to give authors protection from others making money from their work without permission.

Maybe current copyright law isn't up to the job here, but profiting off the back of others' creative works is not socially acceptable, in my opinion.

[–] [email protected] 3 points 3 months ago* (last edited 3 months ago) (1 children)

To expand on what you wrote, I'd say an LLM's output is similar to my having read a book. From here on out, until I become senile, the book is part of my memory. I may reference it, I may parrot some of its details that I can remember to a friend. My own conversational style and future works may even be impacted by it, perhaps subconsciously.

In other words, it’s not as if a book enters my brain and then is completely gone once I’m finished reading it.

So I suppose, then, that the question is more one of volume. How many works consumed is too many? At what point do we shift from the realm of research to the one of profiteering?

There are a certain subset of people in the AI field who believe that our brains are biological forms of LLMs, and that, if we feed an electronic LLM enough data, it’ll essentially become sentient. That may be for better or worse to civilization, but I’m not one to get in the way of wonder building.

[–] [email protected] 3 points 3 months ago

A neural network (the machine-learning technology) aims to imitate the function of neurons in a human brain. If you have lots of these neurons, all sorts of interesting phenomena begin to emerge, and consciousness might be one of them. If/when we get to that point, we'll also have to address several legal and philosophical questions. It's going to be a wild ride.

[–] [email protected] 4 points 3 months ago (1 children)

"Statistiac" of course. And yes I would

[–] Waldowal 3 points 3 months ago

Agree for these reasons:

  • Legally: It's always been legal (in the US, at least) to relay the ideas in a copyrighted work. AI might need to get better at providing a bibliography, but that's likely a courtesy more than a legal requirement.

  • Culturally: Access to knowledge should be free. It's one of the reasons public libraries exist. If AI can help people gain knowledge more quickly and completely, it's just the next evolution of the same idea.

  • Also culturally: Think about what's out on the internet. Millions of recipes, no doubt copied from someone else, with pages of bullshit about how the author "grew up on a farm that produced Mojitos". For decades now, "content creators" have been paid for millions of low-quality, bullshit clickbait articles. Meanwhile, most of the real "knowledge" on the internet is freely accessible technical and product documentation, forum posts like StackOverflow, and scientific studies. All of it is stuff the authors would probably love to have out there and freely accessible. Sure, some accidental copyright infringement might happen here and there, but I think it's a tiny problem relative to the value AI might bring society.

[–] [email protected] 2 points 3 months ago

Huh, I read your headline in a sarcastic tone, so I was totally ready to argue with you. But I agree. Not sure if it's an unpopular opinion, though.

[–] Xeroxchasechase 2 points 3 months ago* (last edited 3 months ago) (6 children)

As long as it's licensed under Creative Commons of some sort. Copyrighted materials are copyrighted and shouldn't be used without consent; this protects individuals too, not only corporations. (Excuse my English)

Edit: Your argument about probability and parameter size is inapplicable in my mind. The same can be said about JPEG lossy compression.

[–] [email protected] 7 points 3 months ago

Creative Commons would not actually help here. Even the most permissive licence, CC-BY, requires attribution. If using material for training material requires a copyright licence (which is certainly not a settled question of law), CC would likely be just the same as all rights reserved.

(There's also CC-0, but that's basically public domain, or as near to it as an artist is legally allowed to do in their locale. So it's basically not a Creative Commons licence.)

[–] [email protected] 1 points 3 months ago (1 children)

The output of an LLM is analogous to re-saving an image as a low-res JPEG. Data is being processed and altered using statistics, but nothing "new" is being created, only lower-quality derivatives. That's why you can't train an LLM on the output of an LLM.

[–] [email protected] 2 points 3 months ago

This is actually a decent argument, but there has to be a threshold. For instance, if I take the average of all RGB values in an image and distribute a single pixel of that average color, is that breaking copyright or somehow immoral?

I recently looked into the speculated model sizes and training-set sizes of GPT and Stable Diffusion, and it appears that if you thought of them as compression algorithms, they'd only be achieving something like 1:7 compression. Ratios like that aren't outlandish for lossy compression.

Compression and redistribution isn't the (stated) goal of these models. Hypothetically, these models are learning patterns and associations of things like styles and how humans write text. And they appear to do things a little beyond just copying and pasting. So, hypothetically, a lot of the model size could mostly consist of learned styles and human preferences, rather than just a compressed database of the images it was trained on. I guess the real test is trying to prompt the models to reproduce an item in its training set, and evaluating how similar it is.
