this post was submitted on 03 Sep 2024
1581 points (97.8% liked)

Technology

59174 readers
2728 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below; to ask if your bot can be added, please contact us.
  9. Check for duplicates before posting; duplicates may be removed.

Approved Bots


founded 1 year ago
MODERATORS
(page 4) 50 comments
[–] [email protected] 10 points 2 months ago

“Too fucking bad”

[–] Fedditor385 10 points 2 months ago

Idk, usually people shut down their business if it can't make a profit...

[–] Argyle13 9 points 2 months ago

Sorry not sorry. Found another company that does not need to rob people and other companies to make money. Also: breaking the law should make these kinds of people face grim consequences. But nothing will happen.

[–] [email protected] 9 points 2 months ago* (last edited 2 months ago) (3 children)

The internet has been primarily derivative content for a long time. As much as some haven't wanted to admit it, it's true. These fancy algorithms now take it to an exponential degree.

Original content had already become scarce as monetization ramped up. And then this generation of AI algorithms arrived.

For several years before LLMs became a thing, the internet was basically just regurgitating data from API calls or scraping someone else's content and re-presenting it in your own way.

[–] [email protected] 9 points 2 months ago (5 children)

Oh how quick people are to jump on the side of copyright and IP.

[–] [email protected] 9 points 2 months ago

If they win, we can just train a CNN on a single 4K HDR movie until it's extremely overfitted, and then it's legal to redistribute
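The overfitting quip above can be sketched in a few lines: a model trained far past convergence on a single example simply memorizes it, so its "generated" output is the training data verbatim. This is a toy NumPy gradient-descent sketch, not an actual CNN; the array `frame` is a hypothetical stand-in for copyrighted pixel data.

```python
import numpy as np

rng = np.random.default_rng(0)
frame = rng.random(16)      # stand-in for one "movie frame" of pixel data
w = np.zeros(16)            # model "parameters", one per pixel

# Minimize ||w - frame||^2 far past any sensible stopping point.
for _ in range(2000):
    grad = 2 * (w - frame)  # gradient of the squared-error loss
    w -= 0.1 * grad

# The overfitted model's "generated" output is its training data, verbatim.
print(np.allclose(w, frame))  # → True
```

The joke lands because copyright cares about the output, not the mechanism: once a model can emit its training data bit-for-bit, calling the emission "generation" changes nothing.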

[–] [email protected] 9 points 2 months ago* (last edited 2 months ago)

Unregulated areas lead to these types of business practices, where people squeeze every last drop out of the opportunity. The cost of these activities will be passed on to the taxpayers.

[–] UncleGrandPa 9 points 2 months ago

Ok... Is that supposed to be a good reason?

[–] aesthelete 9 points 2 months ago

I maintain my insistence that you owe me a business model!

[–] menemen 9 points 2 months ago

"I lose money when I pay for Netflix."

[–] [email protected] 8 points 2 months ago
[–] RangerJosie 8 points 2 months ago

Then go out of business.

Literally, "fuck you go die" situation.

[–] [email protected] 7 points 2 months ago (1 children)

Y'all have the wrong take. Fuck copyright.

[–] [email protected] 7 points 2 months ago* (last edited 2 months ago) (3 children)

I feel we need a term for "copyright bros".

The more important point is that social media companies can claim to OWN all the content needed to train AI. Same for image sites. That means they get to own the AI models. That means the models will never be free. Which means they control the "means of generation". That means that forever and ever and ever most human labour will be worth nothing while we can't even legally use this power. Double fucked.

YOU the user/product will not gain anything with this copyright strongmanning.

And to the argument itself: just because AI is better at learning from existing works, faster, more complete, better memory, doesn't mean that it's fundamentally different from humans learning from artwork. Almost EVERY artist arguing for this is "stealing" themselves, since they learned from and were inspired by existing works.

But I guess the worst possible outcome is inevitable now.

[–] [email protected] 7 points 2 months ago (1 children)

As written the headline is pretty bad, but it seems their argument is that they should be able to train on publicly available copyrighted information, like blog posts and social media, and not on private copyrighted information like movies or books.

You can certainly argue that "downloading public copyrighted information for the purpose of model training" should be treated differently from "downloading public copyrighted information for the intended use of the copyright holder", but it feels disingenuous to put this comment itself, to which someone holds a copyright, in the same category as something not shared publicly, like a paid article or a book.

Personally, I think it's a lot like search engines. If you make something public, someone can analyze it, link to it, or take other derivative actions, but they can't copy it and share the copy with others.

[–] MehBlah 7 points 2 months ago

Perhaps they should go back to what they were before the greed machine was spun up.

[–] [email protected] 7 points 2 months ago

well fuck you Sam Altman

[–] menemen 6 points 2 months ago* (last edited 2 months ago)

Hello from our company's "we finally need to get more AI" executive conference. I gotta find a way to get out of this corporate bullshit...

"We are falling behind" my ass.
