this post was submitted on 04 Jun 2024
Technology
Look, I can only suggest you start this thread over and read it from the top, because the ways this doesn't make much sense have already been thoroughly explained.
Because this is a long one, and if you were going to do that you would have already, I'll at least summarize the headlines:
- LLMs exist whether you like them or not. They can be quantized down to much more reasonable power usage and already run well locally on laptops and tablets, burning just a few watts for just a few seconds (NOW, as you put it).
- They are just one application of ML tech, and they are not useless at all (fuzzy searches with few specific parameters, accessibility features, context-rich explanations of out-of-context images or text), even if their valid uses are misrepresented by both advocates and detractors.
- They are far from the only commonplace computing task that now uses a lot more power than the equivalent did a few years ago, which is a larger issue than just the popularity of ML apps.
- Granting that LLMs will exist in any case, running them in a data center is more efficient, and the issue isn't just "power consumption" but also how the power is generated and what happens to the waste products (in this case excess heat and used water) on the other end.
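As an aside on the quantization point: here's a hedged back-of-the-envelope sketch (illustrative numbers only, not a benchmark of any specific model) of why quantized weights fit on consumer hardware at all:

```python
# Rough weight-storage footprint for a 7B-parameter model at different
# quantization levels. Illustrative arithmetic only: it ignores
# activations, KV cache, and per-format overhead.

def model_memory_gb(n_params: float, bits_per_param: float) -> float:
    """Approximate weight storage in gigabytes."""
    return n_params * bits_per_param / 8 / 1e9

n = 7e9  # a typical "small" local LLM
for bits, label in [(16, "fp16"), (8, "int8"), (4, "4-bit")]:
    print(f"{label:>5}: ~{model_memory_gb(n, bits):.1f} GB")
```

At 4 bits the weights shrink to roughly a quarter of the fp16 footprint (about 3.5 GB versus 14 GB for 7B parameters), which is the main reason these models now run on laptops and tablets at low wattage.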
I genuinely would not recommend we engage in a back-and-forth breaking that down because, again, that's what this very long thread has already been about, and a) I have heard every argument the AI moral panic has put forth (and the ones the dumb techbro singularity peddlers have put forth, too), and b) we'd just go down a circular rabbit hole of repeating what we've already established here over and over again and certainly not convince each other of anything (because see point A).
They exist at the current scale because we're not regulating them, not whether we like it or not.
Absolutely not true. Regulations are both in place and in development, and none of them look like they would prevent any of the applications currently on the market. I know the fearmongering side keeps arguing that a copyright case will stop the development of these, but, to be clear, that's not going to happen. All it'll take to mitigate is an extra line in an EULA, or investing in the dataset of someone who already has that line in theirs (Twitter and Reddit already do; more to come, for sure). The industry is actually quite fond of copyright-based training restrictions: their main effect is most likely to close off open source alternatives and make it so that only Meta, Google, and MS/OpenAI can afford model training.
These are super not going away. Regulation is needed, but it's not going to restrict or eliminate these applications in any way that would make a dent in the (also poorly understood) power consumption costs.
Regulating markets absolutely does prevent practices in those markets. Literally the point.
Yeah, who's saying it doesn't? It prevents the practices it prevents and allows the rest of the practices.
The regulation you're going to see on this does not, in fact, prevent making LLMs or image generators. And it does not, in fact, prevent running them and selling them to people.
You guys have gotten it into your heads that training-data permissions are going to be the roadblock here, and they're absolutely not going to be. There will be common-sense options, like opt-outs and opt-out defaults by mandate, just like there are for data privacy under the GDPR, but not absolute bans by any means.
So how much did opt-out defaults under GDPR stop social media and advertising companies from running social media and advertising data businesses?
Exactly.
What that will do is make it so you have to own a large set of accessible data, like social media companies do. They are positively salivating at the possibility that AI training will require paying them, since they'll have a user agreement that demands allowing your data to be sold for training. Meanwhile, developers of open alternatives, who currently work from a combination of openly accessible online data and monetized datasets put together specifically for research, will face higher costs to develop alternatives. Ideally, the large AI corporations hope, enough cost pressure will bully them out of the market, or at least force them to lag several generations behind in quality.
That's what's currently happening with regulation, along with a bunch of more reasonable guardrails about what you should and should not generate and so on. You'll notice I didn't mention anything about power or specific applications there. LLMs and image generators are not going away, and their power consumption is not going to be impacted.