this post was submitted on 24 Jan 2024
292 points (97.4% liked)

[–] [email protected] 23 points 10 months ago* (last edited 10 months ago) (2 children)

All the more reason that devs and admins need to take responsibility and NOT roll out "AI" solutions without backstopping them with human verification, or at minimum ensuring that the specific solutions they employ are ready for production.
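
A minimal sketch of what that backstopping could look like, assuming a hypothetical `predict` callable that returns a label plus a self-reported confidence score; anything below a threshold gets routed to a human instead of being acted on automatically:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Prediction:
    label: str
    confidence: float  # model's self-reported score, 0.0-1.0

# Illustrative threshold only; in practice you would tune this
# against measured error rates for the specific task.
CONFIDENCE_THRESHOLD = 0.95

def route(record: str,
          predict: Callable[[str], Prediction],
          human_queue: list) -> Optional[str]:
    """Apply the model's answer only when it clears the bar;
    otherwise hand the record to a human reviewer."""
    pred = predict(record)
    if pred.confidence >= CONFIDENCE_THRESHOLD:
        return pred.label           # automated path
    human_queue.append(record)      # backstop: a person decides
    return None

# Toy stand-in for a real model (an assumption, not a real API):
def fake_predict(record: str) -> Prediction:
    return Prediction(label="invoice", confidence=0.80)

queue: list = []
print(route("doc-42", fake_predict, queue))  # None -> sent to review
print(queue)                                 # ['doc-42']
```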

It's all cool and groovy that we have a new software stack that can remove a ton of labor from humans, but if it's too error-prone, is it really useful? I get that the bean counters and suits are ready to boot the data-entry and other low-level employees to boost their bottom line, but this will become a race to the bottom, with everyone blowing their collective loads too early.

Though let's be real, we already know too many companies are going to do this anyway and then try to absolve themselves of liability when shit goes to hell because of it.

[–] FMT99 25 points 10 months ago (1 children)

Having worked in IT for many years, I can tell you bosses only hear the "it can be done" part and never the "but we should add these precautions" or "but we should follow these best practices."

To them, those translate to "those developers want to add unnecessary extra costs."

[–] [email protected] 3 points 10 months ago

"So we can create the dinosaurs immediately you say?"

[–] [email protected] -5 points 10 months ago* (last edited 10 months ago) (2 children)

Soon there will be modules added to LLMs so that they can learn real logic and use it to fact-check their own output.

https://deepmind.google/discover/blog/alphageometry-an-olympiad-level-ai-system-for-geometry/
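
At its core that system pairs a neural proposer with a symbolic verifier in a loop, and only machine-checked results ever come out. A minimal sketch of that generate-and-verify pattern (the `propose_step` and `proves` callables here are toy stand-ins, not AlphaGeometry's actual components):

```python
def solve(premises, goal, propose_step, proves, max_rounds=10):
    """Generate-and-verify loop: the neural model suggests a new
    construction, the symbolic engine checks whether the goal now
    follows by deduction. Nothing unverified is ever returned."""
    facts = set(premises)
    for _ in range(max_rounds):
        if proves(facts, goal):           # symbolic half: sound check
            return facts                  # verified, not guessed
        facts.add(propose_step(facts))    # neural half: fallible guess
    return None                           # no verified result found

# Toy stand-ins so the loop runs end to end:
script = iter(["aux-point-D", "midpoint-lemma", "goal"])
result = solve({"premise-1"}, "goal",
               propose_step=lambda facts: next(script),
               proves=lambda facts, goal: goal in facts)
print(result is not None)  # True: the result was checked, not trusted
```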

This is so awesome; watch Yannic explain it:

https://youtu.be/ZNK4nfgNQpM?si=CN1BW8yJD-tcIIY9

[–] [email protected] 2 points 10 months ago

You might have it backwards. We need LLMs right-sized for translating between pure logical primitives and human language. Let a theorem prover or logical inference system (probably written in Prolog :-) ) provide the smarts; an LLM can make the front end usable by regular people.
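
A minimal sketch of that division of labor, with a toy backward-chaining core standing in for the real theorem prover (which would be Prolog or similar, not these few lines) and a hypothetical `nl_to_query` standing in for the LLM front end:

```python
# Rules are (head, [body, ...]) over ground atoms; the "smarts"
# are pure, checkable logic rather than statistical guesswork.
RULES = [("mortal(socrates)", ["human(socrates)"])]
FACTS = {"human(socrates)"}

def proves(goal: str) -> bool:
    """True iff `goal` follows from FACTS via RULES."""
    if goal in FACTS:
        return True
    return any(head == goal and all(proves(b) for b in body)
               for head, body in RULES)

def nl_to_query(question: str) -> str:
    """Hypothetical LLM front end: translate a human question
    into a logical goal the prover understands."""
    return {"Is Socrates mortal?": "mortal(socrates)"}[question]

print(proves(nl_to_query("Is Socrates mortal?")))  # True
```

The appeal of this split is the asymmetry: the prover's answer is checkable, so the LLM only has to be good at language, not at logic.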

[–] [email protected] 1 points 10 months ago

Here is an alternative Piped link:

https://piped.video/ZNK4nfgNQpM?si=CN1BW8yJD-tcIIY9

Piped is a privacy-respecting open-source alternative frontend to YouTube.

I'm open-source; check me out at GitHub.