this post was submitted on 30 Jun 2023
119 points (96.1% liked)

Programming


Seems pretty bad?

top 15 comments
[–] [email protected] 46 points 1 year ago (2 children)

That example someone posted, where the AI refused to explain the oklch CSS functional notation and instead said it doesn't exist, pretty much exemplifies why this is a bad idea, although I can see how maybe there was good intentions by whoever implemented it.
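For reference, the notation the AI denied exists is real: `oklch()` is specified in CSS Color Module Level 4 and takes lightness, chroma, and hue (plus optional alpha). A minimal illustration, with arbitrary example values:

```css
/* oklch(lightness chroma hue / alpha), per CSS Color Module Level 4 */
.accent {
  color: oklch(70% 0.15 200);            /* a medium cyan-blue */
  background: oklch(95% 0.02 200 / 80%); /* near-white, 80% alpha */
}
```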

In my opinion, the "AI explain" is unnecessary, as I find the MDN contributors already do an excellent job of explaining things as-is, especially in the Examples section under the documentation itself.

[–] [email protected] 17 points 1 year ago

maybe there was good intentions by whoever implemented it

If an executive saying “find ways to use ChatGPT so we can be on the cutting edge” and a developer saying “eh, I guess maybe…” counts as good intentions.

[–] [email protected] 11 points 1 year ago

Agreed, and the questions I have that MDN doesn't answer would probably be ones even less likely for the AI explain to get right.

[–] [email protected] 21 points 1 year ago

They should add a warning that the results are similar to asking an overconfident inept coworker.

[–] TheOtherKundotron 4 points 1 year ago (3 children)

This feature is in beta. That issue title is sort of exaggerated, tbh. Test it if you want, but take everything their beta LLM spits out with a grain of salt.

[–] [email protected] 29 points 1 year ago (1 children)

The "AI explain" button doesn't mention that it's in beta, even in the expanded detail text. But more importantly, even once out of beta, LLMs will never be trustworthy references without humans vetting them. This isn't a "beta" problem, it's a "completely misunderstood the problem and solution" problem.

[–] [email protected] 4 points 1 year ago

It's crazy how this technology that does nothing more than automatically generate text similar to text humans would write (or whatever else it's trained on) has so many people convinced it's a source of expertise on everything.

There's nothing in there with a capacity of reasoning or awareness of fact. It's the difference between an ALU and a CPU at this point. And a lot of people perfectly aware of that fact are essentially grifting the less savvy masses who see a black box that sounds smart.

[–] [email protected] 4 points 1 year ago (1 children)

I suppose they could add the source URL for the information, so you can verify its correctness. But then I don't get why we need a lying AI if we can get the URL in the first place. At that point it would work just like any other good search engine.

Sorry if I sound salty, but I still don't get why companies put fake AI engines everywhere.

[–] TheOtherKundotron 1 points 1 year ago

Marketing, I guess. The MDN docs are already really well written, so having this tool doesn't make much sense. But that's just my opinion.

[–] nitefox 2 points 1 year ago (1 children)

It may do more harm than good: it spits out plausible answers that are either completely or subtly wrong (the latter is worse, obviously), and it's not easy to discern how good an answer actually is.

[–] devfuuu 2 points 1 year ago

And if people are asking the stupid AI about something, it's precisely because they don't know the subject, so they have no way to validate the information. They're fed bad information and believe it's the truth.

[–] miega 4 points 1 year ago (1 children)

I sometimes think that we might currently be at the best AI state we'll see for the next 20 years or so, until other significant technological improvements are achieved.

These AIs were trained on human-generated data, but now we're going to trash the Internet with AI-generated, truth-sounding nonsense, so the same methods will likely produce worse and worse results.

[–] 8ace40 4 points 1 year ago (1 children)

LLMs will need a source of truth, like knowledge graphs. This is a very good summary of the topic, by one of the Wikidata guys: https://youtu.be/WqYBx2gB6vA
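The idea of pairing an LLM with a source of truth can be sketched very roughly: look up the answer in a structured store first, and only let the model phrase facts that were actually retrieved, refusing otherwise. This is a hypothetical toy sketch (the triple store and function names are made up for illustration, not the approach from the linked video):

```python
# Toy "source of truth": a tiny triple store of (subject, predicate) -> object.
triples = {
    ("oklch", "is_a"): "CSS color function",
    ("oklch", "defined_in"): "CSS Color Module Level 4",
}

def grounded_answer(subject: str, predicate: str) -> str:
    """Answer only from stored facts; admit ignorance instead of guessing."""
    fact = triples.get((subject, predicate))
    if fact is None:
        # An ungrounded LLM would hallucinate here; we refuse instead.
        return f"No stored fact for ({subject}, {predicate})."
    return f"{subject}: {fact}"
```

The point of the pattern is the `None` branch: unlike a bare LLM, the system has an explicit way to say "I don't know" when the knowledge graph has no matching fact.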

[–] miega 1 points 1 year ago

thanks, very interesting video!

[–] NatoBoram 2 points 1 year ago

Yeah, that does seem pretty bad.