this post was submitted on 12 Jun 2024
393 points (95.4% liked)

Technology

60021 readers
3299 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related content.
  3. Be excellent to each another!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, to ask if your bot can be added please contact us.
  9. Check for duplicates before posting, duplicates may be removed

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] [email protected] 8 points 6 months ago (1 children)

They could make Siri change its voice and Genmoji based on the degree of certainty of the response:

  • Trust me: Arnold as Terminator 😎
  • Eehhhh, could be bullshit: shrugging old man meme 🤷🏻‍♂️
  • Just kiddin' here: whacky Jerry Lewis 🤪

They could sell different voice packages. Revive the ringtone market.

[–] [email protected] 19 points 6 months ago (2 children)

The AI is confidently wrong; that's the whole problem. If there were an easy way to know when it could be wrong, we wouldn't be having this discussion.

[–] AdrianTheFrog 2 points 6 months ago

this paper tries to do that: arxiv.org/pdf/2404.04689

there are also several other techniques I think
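One commonly discussed technique (separate from the linked paper) is self-consistency sampling: ask the model the same question several times at nonzero temperature and treat the agreement rate among its answers as a rough confidence proxy. A minimal sketch of the scoring side, with the sampled answers hard-coded as a stand-in for real model completions:

```python
from collections import Counter

def self_consistency_confidence(samples):
    """Given multiple sampled answers to the same question, return the
    majority answer and the fraction of samples that agree with it.
    High agreement suggests higher confidence; low agreement suggests
    the model may be guessing."""
    if not samples:
        return None, 0.0
    counts = Counter(samples)
    answer, n = counts.most_common(1)[0]
    return answer, n / len(samples)

# Hypothetical completions from five samples of the same prompt:
answer, conf = self_consistency_confidence(
    ["Paris", "Paris", "Paris", "Lyon", "Paris"]
)
# answer == "Paris", conf == 0.8
```

This only catches inconsistent wrong answers, of course; a model that is confidently wrong the same way every time would still score high agreement, which is exactly the failure mode being discussed above.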

[–] [email protected] 1 points 6 months ago

While it can’t “know” its own confidence level, it can distinguish between general knowledge (12” in 1’) and specialized knowledge that requires supporting sources.

At one point, I had a ChatGPT memory set up so it would automatically provide sources for specialized knowledge, and it did a pretty good job.