[–] [email protected] 13 points 1 day ago

Dead end, local maximum, tomayto, tomahto.

[–] [email protected] 31 points 1 day ago* (last edited 1 day ago) (3 children)

The right tool for the right job. It's not intelligent, it's just trained. It all boils down to stochastics.

And then there is the ecological aspect...
Or sometimes the moral aspect, if it is used to manage someone's "fate" in application processing. And it might be trained to be racist or misogynist if you use the wrong training data.
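
To make the "stochastics" point concrete, here is a minimal sketch (not any real model's code, just an illustration with made-up names and a toy distribution) of what sampling the next token from a probability distribution looks like:

```python
import random

def sample_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    # The model only ever yields a probability distribution over the next token;
    # the actual output is drawn at random from it.
    # Rescaling by temperature: lower temperature = more deterministic choices.
    scaled = {tok: p ** (1.0 / temperature) for tok, p in probs.items()}
    r = random.uniform(0.0, sum(scaled.values()))
    cumulative = 0.0
    for tok, weight in scaled.items():
        cumulative += weight
        if r <= cumulative:
            return tok
    return tok  # fallback for floating-point edge cases

# Toy distribution a model might produce after "The sky is ..."
print(sample_next_token({"blue": 0.7, "clear": 0.2, "falling": 0.1}))
```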

[–] [email protected] 4 points 16 hours ago

Yeah. Considering the obscene resources ChatGPT and the others need, I don't think the niche use cases where they shine make it worth it.

[–] rottingleaf 2 points 1 day ago (1 children)

The moral aspect is resolved too, if you approach building human systems correctly.

There is a person or an organization making a decision. They may use an "AI", they may use Tarot cards, they may use the applicant's f*ckability from photos. But they are somehow responsible for that decision and it is judged by some technical, non-subjective criteria afterwards.

That's how these things are done properly. If a human system is not designed correctly, then it really doesn't matter which particular technology or social situation will expose that.

But I might have too high expectations of humanity.

[–] [email protected] 3 points 12 hours ago* (last edited 4 hours ago) (1 children)

Accountability of a human decision maker is the way to go. Agreed.

I see the danger when the accountable person's job demands high throughput, which forces fast decision making, and the tool (the LLM) offers fast and easy decisions. What is that person going to do if they just see cases instead of people and fates?

[–] rottingleaf 1 points 12 hours ago

If consequences for mistakes follow regardless, then it doesn't matter.

Or if you mean the person reviewing others' decisions - you can build in several levels of review. You can have reviewers with an interest in different outcomes, like in criminal justice (the way it's supposed to work, anyway).

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago) (1 children)

Betteridge's law of headlines

And after all, current "AI" models are just one step on a longer road, which is how I read the conclusion of the article.

[–] GamingChairModel 9 points 1 day ago* (last edited 1 day ago)

But if you read the article, then you saw that the author specifically concludes that the answer to the question in the headline is "yes."

This is a dead end and the only way forward is to abandon the current track.

[–] db2 3 points 1 day ago (1 children)
[–] [email protected] 1 points 1 day ago (1 children)

Judging just by the headline, the answer should be "No", though, according to Betteridge's law of headlines.

[–] [email protected] 8 points 1 day ago (1 children)

But we don't like AI, therefore anything negative said about it is more plausible than anything positive said about it. You see the dilemma here.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago) (1 children)

Hmmh, I usually get upvoted for citing Betteridge's law... but not today, in this context. Yeah, I'm aware of Lemmy's mentality. I still think what I said holds up 😉

[–] [email protected] 0 points 1 day ago (1 children)

I've found the Fediverse to be a lot "bubblier" than Reddit, I suspect because the communities are smaller, which makes it easier for groupthink to become established. One element of the @technology bubble is a strong anti-AI sentiment; I've kind of given up on getting any useful information on that subject here. Quite unfortunate, given how widespread AI is getting.

[–] [email protected] 2 points 1 day ago* (last edited 1 day ago)

I'm subscribed to: [email protected], [email protected], [email protected], [email protected] (German)

I think we could co-exist peacefully if we put in some effort. But that's not how Lemmy works. We use the "All" feed instead of subscriptions and then complain that we don't want posts about all the topics... Most posts go to AskLemmy, NoStupidQuestions or Technology, regardless of whether dedicated communities exist... It's a bit of a mess. And yeah, the mob mentality is strong. It helps if you go with the flow. Just don't mention nuanced, complicated details. Simple truths always win, especially ones people want to believe are true.

I really don't care any more. I mean, I agree that most of this is a problem. Some days half the posts on Technology (or more) are about AI, and a lot of them aren't even newsworthy or substantive. If I were in charge, I'd delete those, tell people to discuss the minute details in a dedicated community, and keep things more balanced. There are a lot of other very interesting tech-related topics that could be discussed instead.

And simultaneously, I'd love to see some more activity and engagement in the AI communities.

[–] just_another_person -1 points 1 day ago (1 children)

Yes. That's why everyone is scrambling to create new interoperable model languages and frameworks that work on more efficient hardware.

Almost everything that's being productized right now stems from work done in the Python world years ago. It got a swift uptake once Nvidia made it easier to run compiled models on their hardware, but now everyone wants more efficient options.

FPGAs offer the huge upside of not being locked into a specific vendor, so some people are going that route. Others are just making their frameworks more modular to support the numerous TPU/NPU processors that everyone and their brother needlessly keeps building into things.

Something will come out of all of this, but right now the community shift is to do things without needing so much goddamn power draw. More efficient modeling will come as well, but that's less important since everything is compiled down to something that is supported by the devices themselves. At the end of the day, this is all compilation and logic, and we just need to do it MUCH leaner and faster than the current ecosystem is creeping towards. It's not only detrimental to the environment, it's also not as profitable. Hopefully the latter makes OpenAI and Microsoft get their shit together instead of building more power plants.
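
For what it's worth, the "compile it down to something the device supports" workflow already exists in rough form. Here's a hypothetical sketch using PyTorch plus ONNX - just one possible toolchain, not the only one, and the model and file names are made up:

```python
import torch
import torch.nn as nn

# Define (or load) a model in one framework...
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
model.eval()

# ...export it to an interoperable graph format.
dummy_input = torch.randn(1, 16)
torch.onnx.export(model, dummy_input, "tiny_model.onnx",
                  input_names=["input"], output_names=["logits"])

# Inference side: no PyTorch needed, only a runtime for the exported graph,
# which can target CPU, GPU, or whatever accelerator has a backend.
import onnxruntime as ort
import numpy as np

session = ort.InferenceSession("tiny_model.onnx")
logits = session.run(None, {"input": np.random.randn(1, 16).astype(np.float32)})[0]
print(logits.shape)  # (1, 4)
```

The point is that the exported graph, not the Python training code, is what the device ends up running - which is why the efficiency fight is happening at that layer.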

[–] [email protected] 11 points 1 day ago (1 children)

I don't really see how FPGAs have a role to play here. What circuit are you going to put on one? If it's tensor multipliers, even at low precision, a GPU will be an order of magnitude faster just on clock speed, and another in terms of density.

What we've got right now has almost nothing to do with Python, and everything to do with the compute density of GPUs crossing a threshold. FPGAs are lower density and slower.
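
As a back-of-envelope illustration of that clock-speed-times-density argument - every number below is a made-up placeholder, not a spec for any real GPU or FPGA:

```python
def macs_per_second(num_mac_units: int, clock_hz: float) -> float:
    """Peak multiply-accumulates per second for a fixed array of MAC units."""
    return num_mac_units * clock_hz

# Hypothetical GPU: tens of thousands of tensor-core MAC lanes at ~1.5 GHz.
gpu = macs_per_second(num_mac_units=50_000, clock_hz=1.5e9)

# Hypothetical FPGA design: far fewer DSP-based MACs, fabric clocked at ~300 MHz.
fpga = macs_per_second(num_mac_units=5_000, clock_hz=3e8)

print(f"GPU / FPGA peak ratio: {gpu / fpga:.0f}x")  # ~50x with these assumptions
```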