this post was submitted on 28 Jan 2025
360 points (99.2% liked)
Technology

Last week, Copilot made an unsolicited appearance in Microsoft 365. This week, Apple turned on Apple Intelligence by default in its upcoming operating system releases. And it isn't easy to get through any of Google's services without stumbling over Gemini.

Regulators worldwide are keen to ensure that marketing and similar services are opt-in. When dark patterns are used to steer users in one direction or another, lawmakers pay close attention.

But, for some reason, forcing AI on customers is acceptable. Rather than asking, "We're going to shovel a load of AI services you never asked for, but that our investors really need you to use, into your apps. Is that OK?", companies simply assume users will be delighted to see their formerly pristine applications cluttered with AI features.

Customers have not asked for any of this. There has been no clamoring for search summaries, no pent-up demand for the revival of a jumped-up Clippy. There is no desire to wreak further havoc on the environment to get an almost-correct recipe for tomato soup. And yet here we are, ready or not.

Without a choice to opt in, the beatings will continue until AI adoption improves or users find that pesky opt-out option.

[–] [email protected] 97 points 2 days ago (7 children)

I am convinced that everyone really has to make this work one way or another because so goddamn much money was - and still is - being spent on this garbage.

[–] [email protected] 76 points 2 days ago (3 children)

We're being asked at work to "up our Copilot usage" to justify the license costs. Pretty sad when you need to be forced to use it.

[–] BroBot9000 39 points 2 days ago (1 children)

Please refuse and at every opportunity let them know how stupid they are for wasting that money.

[–] spankmonkey 7 points 2 days ago (1 children)

Use it and then explain how much of a waste of time it was to get the wrong results.

[–] [email protected] 43 points 2 days ago* (last edited 2 days ago) (1 children)

No, that just plays into their hands. If you complain that it sucks, you're just "using it wrong".

It's better to not use it at all so they end up with dogshit engagement metrics, and the exec who approved the spend has to explain to the board why they wasted so much money on something their employees clearly don't want or need to use.

Remember, they won't show the complaints, just the numbers, so those numbers have to suck if you really want the message to get through.

[–] BroBot9000 10 points 2 days ago
[–] ObviouslyNotBanana 3 points 2 days ago

Just because you brought up copilot, I think people need to see this

[–] eager_eagle 2 points 2 days ago

lmao my workplace encourages use / exploration of LLMs when useful, but that's stupid

[–] [email protected] 22 points 2 days ago* (last edited 2 days ago)

Correct. It's about metrics. They're making AI opt-out because they desperately need to pump user engagement numbers, even if those numbers don't mean anything.

It's all for the shareholders. Big tech has been, for a while now, chasing a new avenue for meteoric growth, because that's what investors have come to expect. So they went all in on AI, to the tune of billions upon billions, and came crashing, hard, into the reality that consumers don't need it and enterprise can't use it.

Transformer models have two fatal flaws: the hallucination problem, to which there is still no solution, makes them unsuitable for enterprise applications, and their cost per operation makes them unaffordable for retail customer applications (i.e., a chatbot that gives you synonyms while you write is the sort of thing people will happily use, but won't pay $40 a month for).

So now the C-suites are standing over the edge of the burning trash fire they pushed all that money into, knowing that at any moment their shareholders are going to wake up and shove them into it too. They've got to come up with some kind of proof that this investment is paying off. They can't find that proof in sales, because no one is buying, so instead they're going to use "engagement"; shove AI into everything, to the point where people basically wind up using it by accident, then use those metrics to claim that everyone loves it. And then pray to God that one of those two fatal flaws will be solved in time to make their investments pay off in the long run.

[–] [email protected] 24 points 2 days ago

Yeah, it's sunk cost fallacy all the way down. We're just being harvested because...fuck us I guess.

[–] [email protected] 14 points 2 days ago

It's a combination of sunk cost and FOMO.

[–] drahardja 2 points 1 day ago

This is it. “We spent so damn much money on this, we gotta see some NUMBERS on the dashboard!”

[–] SlopppyEngineer 2 points 2 days ago

They have to put it into everything and get people and apps depending on it before the AI bubble pops, so that after the pop it's too difficult to remove or to break the dependency. As long as it's in there, they can charge subscriptions and fees for it.

[–] [email protected] 0 points 2 days ago

"Copilot, what is the Sunk Cost Fallacy?"