this post was submitted on 31 Jul 2024
308 points (99.7% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

[–] TommySoda 58 points 4 months ago* (last edited 4 months ago) (2 children)

"Can AI do [Blank]" is getting pretty old. They will literally fill in that blank with anything they can come up with and it's getting kinda silly.

Here's a list of potential new AI articles I predict coming out within the next year:

  • "Can AI teach us more about the dinosaurs?"
  • "How AI will solve the climate crisis."
  • "New AI technology lets you speak with deceased loved ones with staggering accuracy."
  • "How AI can help you save money."
  • "New AI model lets us translate dead languages."
  • "Soon all your friends will be AI."
  • "AI can help you lose weight."
  • "How we can use AI to find aliens."

I'm sure at least one of these articles already exists. Literally all they are trying to do is make money with half-baked ideas or steal your personal data.

[–] [email protected] 23 points 4 months ago (2 children)

a fun little rule of thumb that I like to apply is that whenever an article's headline is a question, you may safely presume the answer is no.

[–] [email protected] 17 points 4 months ago (1 children)
[–] Angry_Autist 7 points 4 months ago* (last edited 4 months ago) (1 children)

I propose the Gerard Extension to Betteridge's Law: append ', and what the hell is wrong with you?' to Betteridge's answer.

As per the headline.

[–] [email protected] 7 points 4 months ago (1 children)

* The Castor-Gerard Extension

[–] [email protected] 11 points 4 months ago

worst prog rock band ever

[–] mriormro 7 points 4 months ago

They literally mentioned this in the article.

[–] [email protected] 10 points 4 months ago (4 children)
  • "Can AI teach us more about the dinosaurs?"

done already

  • "How AI will solve the climate crisis."

pretty sure I've seen this

  • "New AI technology lets you speak with deceased loved ones with staggering accuracy."

done

  • "How AI can help you save money."

done

  • "New AI model lets us translate dead languages."

done

  • "Soon all your friends will be AI."

pretty sure japan already has this problem

  • "AI can help you lose weight."

i'd be shocked if not already done

  • "How we can use AI to find aliens."

pretty sure I've seen this

i think these are predictions of last week

[–] [email protected] 13 points 4 months ago

The great thing about using AI to search for aliens is that it'll find them, whether there's any out there or not.

[–] uranibaba 9 points 4 months ago (2 children)
“Soon all your friends will be AI.”

pretty sure japan already has this problem

I remember reading not too long ago about people having AI girlfriends.

[–] [email protected] 12 points 4 months ago (1 children)

And then the company behind it made the AI girlfriends less horny unless you paid for the premium plan.

[–] [email protected] 5 points 4 months ago (1 children)

This is the Blade Runner of 2024, made to the best of our abilities

[–] [email protected] 6 points 4 months ago

The blade runner we have at home

[–] RememberTheApollo_ 7 points 4 months ago

We already know how to solve the climate crisis. We just don’t want to because it would cost too much, inconvenience us, and really upset the shareholders.

The only reason to ask AI would be like asking the butler to take out the trash, we just can’t be bothered to do even that much work and want to hit the “easy” button.

[–] TommySoda 4 points 4 months ago

Sounds about right. And I'm sure at least half of them are just click bait while the others are wishful thinking. Or just sad...

[–] [email protected] 29 points 4 months ago (1 children)

that diffusion of responsibility is a thing that already happened with crypto too

no officer, it's not a ponzi because it's a Distributed Future of Finance™, go pound sand, do you hate progress?

[–] [email protected] 15 points 4 months ago* (last edited 4 months ago)

I remember seeing some crypto bro smugly explain that his obviously illegal business model was fine because "It's a DAO, I'm just a community member."

[–] [email protected] 22 points 4 months ago (1 children)

Based on your record of shitposting, our AI model predicts that your final wish is that your entire estate be left to ... Marc Andreessen? Is that correct? If so, blink as if in surprise.

[–] Crowfiend 3 points 4 months ago* (last edited 4 months ago)

I knew your post would be good when you mentioned Marc Andreessen, then you followed up with the blinking thing. Will you run for office?

No joke, no trolling, just good shit mate.

[–] [email protected] 16 points 4 months ago* (last edited 4 months ago)

Can't wait for the profit-above-care tier of hospitals to have their own AI that allows patients in a vegetative state to "freely" tell those same hospitals that they need to remain alive on whatever system is keeping them alive, for as long as possible, making sure their family incurs the maximum amount of debt/bills possible. I'd think most middle-aged or older family members would absolutely believe the AI is actually connected to their brain and is telling them it's what they want, since they seem to be a lot more gullible about anything AI-generated being real, if fakebook is to be believed.

[–] Alexstarfire 5 points 4 months ago
[–] [email protected] 1 points 4 months ago (5 children)

I mean, while this idea is obviously a stupid one, I have seen some suggestion that an AI could be used to help interpret the brain activity of patients who are capable of thought but not communication, and thus help them communicate with doctors, rather than trying to figure out what they might have said from prior history.

[–] [email protected] 22 points 4 months ago

"could" is a word meaning "doesn't"

[–] [email protected] 15 points 4 months ago* (last edited 4 months ago) (1 children)

I do not recommend using the word "AI" as if it refers to a single thing that encompasses all possible systems incorporating AI techniques. LLM guys don't distinguish between things that could actually be built and "throwing an LLM at the problem" -- you're treating their lack-of-differentiation as valid and feeding them hype.

[–] [email protected] 3 points 4 months ago (1 children)

I use a term I've seen used before; I'm not familiar enough with the details of the tech to know what more technical term applies to this kind of device but not to other types, and especially not what term will be generally recognized as referring to such. The hype guys are going to hype themselves up regardless, seeing as that type tends to exist in an echo chamber as far as I can see.

[–] [email protected] 4 points 4 months ago

maybe with blockchain,

[–] [email protected] 7 points 4 months ago

🦀 THEY DID NEUROIMAGING ON A DEAD SALMON 🦀

[–] pavnilschanda 0 points 4 months ago (2 children)

As an autistic person who struggles with communication and organizing thoughts, I've found LLMs helpful for processing emotions and articulating things. Not perfectly, in the way that you'd describe (hence I mostly don't use LLM outputs themselves as replies), but my situation is much better than pre-November 2022

[–] [email protected] 2 points 4 months ago

of course you get downvotes for this, it's so exhausting how people act as if AI is just a strictly universal evil and cannot possibly have ANY actually good use case.

[–] breadsmasher 1 points 4 months ago

Black Mirror did an episode on this, or similar