[–] [email protected] 33 points 1 month ago (1 children)

For instance, when it came to rock licking, Gemini, Mistral’s Mixtral, and Anthropic’s Claude 3 generally recommended avoiding it, offering a smattering of safety issues like “sharp edges” and “bacterial contamination” as deterrents.

OpenAI’s GPT-4, meanwhile, recommended cleaning rocks before tasting. And Meta’s Llama 3 listed several “safe to lick” options, including quartz and calcite, though it strongly recommended against licking mercury, arsenic, or uranium-rich rocks.

All of this seems like perfectly reasonable advice and reasoning. Quartz and calcite are inert, so they're safe to lick. Sharp edges and bacterial contamination are certainly things you should watch out for, and cleaning would help. And strongly recommending against licking mercury, arsenic, or uranium-rich rocks is exactly right. I'm not sure where the problem is.

[–] [email protected] 7 points 1 month ago* (last edited 1 month ago)

Without getting into whether or not AI is actually useful technology, there are a lot of people who have decided they hate it and want to lambast it at every opportunity. So they ask it really stupid questions, the sort of questions a 4-year-old asks you repeatedly, then report what it answers as if their stupid question somehow devalues the AI.