this post was submitted on 10 Jul 2023
354 points (91.7% liked)


Which of the following sounds more reasonable?

  • I shouldn't have to pay for the content that I use to tune my LLM model and algorithm.

  • We shouldn't have to pay for the content we use to train and teach an AI.

By calling it AI, the corporations are able to advocate for a position that's blatantly pro corporate and anti writer/artist, and trick people into supporting it under the guise of a technological development.

[–] assassin_aragorn 59 points 1 year ago (3 children)

The funniest thing I've seen on this is OpenAI's CEO, Sam Altman, talking about how he's a bit afraid of what they've created and how it needs limitations -- and then, when the EU begins to look at regulations, immediately rejecting the concept, to the point of threatening to leave the European market. It's incredibly transparent what they're doing.

Unfortunately I don't know enough about the technology to say if the algorithms and concepts themselves are novel, but without a doubt they couldn't exist without modern computing power capabilities.

[–] [email protected] 20 points 1 year ago* (last edited 1 year ago) (2 children)

I can tell you for a fact that there's nothing new going on -- only the MASSIVE investment from Microsoft that lets them train on an insane amount of data. I'm no "expert" per se, but I've been studying and working with AI for over a decade, so feel free to judge my reply as you please.

[–] [email protected] -2 points 1 year ago (1 children)

nothing new going on

Uhhhh, the available models are improving by leaps and bounds every month, and there's quite a bit of tangible advancement happening every week. Even more critically, the models that can be run on a single computer are very quickly catching up to those that, just a year or two ago, required some percentage of a hyperscaler's datacenter to operate.

Unless you mean to say that the current insane pace of advancement is all built off of decades of research, and that a lot of the specific recent advancements happen to be fairly small innovations on previous research, infused with a crapload of cash and hype (far more than most researchers could ever dream of).

[–] [email protected] 9 points 1 year ago* (last edited 1 year ago)

all built off of decades of research and a lot of the specific advancements recently happen to be fairly small innovations into previous research infused with a crapload of cash and hype

That's exactly what I mean! The research projects I was involved in 5-7 years ago had already created LLMs like this that were as impressive as GPT. I don't mean that what's going on now isn't impressive; I just mean that there's nothing actually new. That's all. It's similar to the previous hype wave in AI around machine learning models, when Google was pushing deep learning. I really just want to point that out.

EDIT: Typo

[–] SCB -3 points 1 year ago (1 children)

nothing new going on

I can't think of a less accurate thing to say about LLMs, short of calling them a world-ending threat.

This is a bit like saying "The internet is a cute thing for tech nerds but will never go mainstream" in like 1995.

[–] [email protected] 11 points 1 year ago (1 children)

The concepts themselves are some 30 years old, but storage capacity and processing speed have only recently reached a point where generative AI outperforms competing solutions.

But regarding the regulation thing, I don't know what was said or proposed, and this is just me playing devil's advocate: could it be that the CEO simply disagrees with the specifics of the proposed regulations, while still believing that some other, different kind of regulation should exist?

[–] [email protected] 15 points 1 year ago (1 children)

Certainly could be, but probably an optimistic take. Most likely they're just trying to do what corporations have been doing for ages, which is to weaponize government policy to prevent competition. They don't want restrictions that will materially impact their product, they want restrictions that will materially impact startups to make it more difficult for them to intrude on the established space.

[–] jumperalex 7 points 1 year ago

I think if you fed your response into ChatGPT and asked it to summarize in two words it would return,

"Regulatory Capture"

[–] [email protected] -3 points 1 year ago (1 children)

And what are they doing? As a reminder, OpenAI is a non-profit.

[–] [email protected] 6 points 1 year ago (1 children)

I thought they moved to for-profit back in 2019?

[–] BetaDoggo_ 1 points 1 year ago

They're a non-profit managed by a for-profit, which has received most of its funding from another for-profit.