this post was submitted on 08 Aug 2024
219 points (83.7% liked)

Unpopular Opinion


I've recently noticed this opinion seems unpopular, at least on Lemmy.

There is nothing wrong with downloading public data and doing statistical analysis on it, which is pretty much what these ML models do. They are not redistributing other people's works (well, sometimes they do, unintentionally, and safeguards to prevent this are usually built in). The training data is generally much, much larger than the model itself, so it is generally not possible for a model to reconstruct arbitrary specific works. They are not creating derivative works, in the legal sense, because they do not copy and modify the original works; they generate "new" content based on learned probabilities.
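The size argument can be made concrete with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions (a 70B-parameter model in fp16, roughly 10 trillion training tokens), not figures for any specific model:

```python
# Rough capacity check: how many bits of storage does the model have
# per token of training data? (All numbers are illustrative assumptions.)
model_params = 70e9          # assumed: a 70B-parameter model
bits_per_param = 16          # assumed: fp16 weights
training_tokens = 10e12      # assumed: ~10 trillion training tokens

model_bits = model_params * bits_per_param
bits_per_token = model_bits / training_tokens

print(f"{bits_per_token:.3f} bits of capacity per training token")
# With these assumptions, well under one bit per token — far too little
# to store the training set verbatim.
```

Under these assumed numbers the model has only about 0.1 bits of capacity per training token, which is the intuition behind "it can't memorize everything": it is forced to compress, i.e., to learn general patterns rather than store copies.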

My opinion on the subject is pretty much in agreement with this document from the EFF: https://www.eff.org/document/eff-two-pager-ai

I understand the hate for companies using data you would reasonably expect to be private. I understand the hate for purposely over-fitting a model on someone's data to reproduce their "likeness." I understand the hate for AI-generated shit (because it is shit). I really don't understand where all this hate for using public data to build a "statistical" model that "learns" general patterns is coming from.
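The memorization-versus-generalization point can be shown with a deliberately over-fitted toy model. This is a hypothetical sketch, not how real LLMs are trained: a character-level bigram chain given a "training set" so tiny that every transition is unique, so generation can only replay the original verbatim:

```python
import random
from collections import defaultdict

def train_bigram(text):
    """Character-level bigram table: maps each char to the chars seen after it."""
    table = defaultdict(list)
    for a, b in zip(text, text[1:]):
        table[a].append(b)
    return table

def generate(table, start, length):
    """Sample a string of up to `length` chars by walking the bigram table."""
    out = [start]
    for _ in range(length - 1):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(random.choice(followers))
    return "".join(out)

corpus = "abcdefg"  # tiny "training set": every bigram occurs exactly once
table = train_bigram(corpus)
print(generate(table, "a", len(corpus)))  # reproduces "abcdefg" exactly
```

Because each character has exactly one recorded successor, the "model" has memorized its training data and can only regurgitate it. With a large, diverse corpus the same table would hold many successors per character, and sampling would produce novel sequences instead; the complaint about likeness-reproduction is essentially a complaint about engineering a model into the first regime.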

I can also understand the anxiety people may feel, if they believe all the AI hype, that it will eliminate jobs. I don't think AI is going to be able to directly replace people any time soon. It will probably improve productivity (with stuff like background removers, better autocomplete, etc.), which might eliminate some jobs, but that's really just a problem with capitalism, and productivity increases are generally considered good.

[–] [email protected] 6 points 3 months ago (1 children)

Sure but restricting open source efforts is restricting people.

[–] [email protected] 6 points 3 months ago* (last edited 3 months ago) (1 children)

Yes. Large groups of people acting in concert, with large amounts of funding and influence, must be held to the highest standards, regardless of whether they're doing something I personally value highly.

An individual's rights must be held sacred.

When those two goals are in conflict, we must melt the corporation-in-conflict down for scrap parts, donate all of its intellectual property to the public domain, and try again with forming a new organization with a similar but refined charter.

Shareholders should be, ideally, absolutely fucked by this arrangement, when their corporation fucks up, as an incentive to watch and maintain legal compliance in any companies they hold shares in and influence over.

Investment will still happen, but with more care. We have historically used this model to great innovative success, public good, and lucrative dividends. Some people have forgotten how it can work.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago) (1 children)

I think they are saying that preventing open-source models from being trained and released prevents people from using them. Trying to make training these models more difficult doesn't just affect businesses; it affects individuals too. Essentially, you have all been trying to stand in the way of progress, probably because of fears over job security. It's not really different from being a Luddite.

[–] [email protected] 1 points 3 months ago* (last edited 3 months ago)

Essentially you have all been trying to stand in the way of progress,

Fuck progress from anyone who can't be bothered to do it right. There are justified risks, where the cost of inaction is just as horrible as the cost of action. This isn't that, and everyone saying it is, is an asshole whose shouting we would all be better off without.

This work can be done correctly, and even reasonably quickly. Shortcuts aren't merited.

probably because of fears over job security. It's not really different to being a luddite.

My job is secure. I have substantially more than typical expertise in language models.

The emperor, today, is butt naked. Anyone telling you we are about to see fast new progress is full of shit, and isn't your friend.

I've seen this before, and I'll see it again.

I've given a polite warning, where it looked like folks might listen. The rest aren't my problem.