this post was submitted on 03 Jun 2024
1475 points (98.0% liked)
People Twitter
Yeah, LLMs are a really great advancement in language processing, and the ability to let them hook into other systems after sussing out what the user means is legitimately pretty cool.
The issue is that people keep mistaking articulate mimicry of confidence and knowledge for actual knowledge and capability.
It's doubly frustrating at the moment because people keep thinking that LLMs are what AI is, rather than just one type of AI. It's like how people now hear "crypto" and assume you're talking about the currency scheme, which is needlessly frustrating if you work in the security sector.
Making a system that looks at your purchase history (there's no other reliable way to get that data), identifies the staple goods you buy often, and then predicts the cadence you buy them at would be a totally feasible AI problem. An LLM wouldn't be even remotely appropriate for any of that until the end, once the system has found prices by (probably) crudely scraping grocery store websites and wants to tell you where to go, because LLMs are good at things like "turn this data into a friendly shopping list message."
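The "staples and cadence" part really is just plain data crunching, no LLM involved. A minimal sketch of that idea (with a made-up purchase history and a hypothetical `predict_next_purchases` helper, not anyone's real system):

```python
from datetime import date, timedelta
from collections import defaultdict

# Hypothetical purchase history: (item, purchase date) pairs.
purchases = [
    ("milk", date(2024, 5, 1)), ("milk", date(2024, 5, 8)),
    ("milk", date(2024, 5, 15)), ("eggs", date(2024, 5, 3)),
    ("eggs", date(2024, 5, 17)), ("bread", date(2024, 5, 10)),
]

def predict_next_purchases(purchases, min_buys=2):
    """Group purchases by item, estimate the average gap between
    buys, and project each staple's next expected purchase date."""
    by_item = defaultdict(list)
    for item, day in purchases:
        by_item[item].append(day)

    predictions = {}
    for item, days in by_item.items():
        days.sort()
        if len(days) < min_buys:
            continue  # not enough history to count as a staple
        gaps = [(b - a).days for a, b in zip(days, days[1:])]
        avg_gap = sum(gaps) / len(gaps)
        predictions[item] = days[-1] + timedelta(days=round(avg_gap))
    return predictions

print(predict_next_purchases(purchases))
```

A real version would want something smarter than a plain average (seasonality, outlier trips), but the point stands: the prediction step is ordinary statistics, and the LLM only earns its keep at the end, turning the result into a friendly message.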
To be completely fair, the confusion is because of the marketing. You and I both know that Tesla cars can't really drive themselves for the same reasons you outlined but the typical person sees "autonomous mode" or "self-driving" applied to what they are buying.
People treat LLMs like something out of a superhero movie because they're led to believe that's what they are. The people shoveling in money based on promises and projections are the root cause.