this post was submitted on 23 Sep 2024
31 points (97.0% liked)

TechTakes


Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community


Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned soo many “esoteric” right wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

Last week's thread

(Semi-obligatory thanks to @dgerard for starting this)

[–] [email protected] 11 points 1 month ago (8 children)

Was salivating all weekend waiting for this to drop, from Subbarao Kambhampati's group:

Ladies and gentlemen, we have achieved block stacking abilities. It is a straight shot from here to cold fusion! ... unfortunately, there is a minor caveat:

Looks like performance drops like a rock as the number of required steps increases...

[–] [email protected] 11 points 1 month ago (2 children)

Saw an unexpected Animatrix reference on Twitter today - and from an unrepentant promptfondler, no less:

[screenshot: animatrix promptfondler]

This ended up starting a lengthy argument with an "AI researcher" (read: promptfondler with delusions of intelligence), which you can read if you wanna torture yourself.

[–] [email protected] 10 points 1 month ago (1 children)

yes. that's all true, but academics and artists and leftists are actually calling for Buttlerian jihad all the time. when push comes to shove they will ally with fascists on AI

This guy severely underestimates my capacity for being against multiple things at the same time.

[–] [email protected] 8 points 1 month ago

The type of guy who was totally convinced by the 'but what if the AI needs to generate slurs to stop the nukes?' argument.

[–] [email protected] 9 points 1 month ago

Guy invented a new way to misinterpret The Matrix, nice. Was getting tired of all the pilltalk

[–] [email protected] 10 points 1 month ago (2 children)

I vaguely remember mentioning this AI doomer before, but I ended up seeing him openly stating his support for SB 1047 whilst quote-tweeting a guy talking about OpenAI's current shitshow:

[screenshot: pro-1047 doomer]

I've had this take multiple times before, but now I feel pretty convinced the "AI doom/AI safety" criti-hype is going to end up being a major double-edged sword for the AI industry.

The industry's publicly and repeatedly hyped up this idea that they're developing something so advanced/so intelligent that it could potentially cause humanity to get turned into paperclips if something went wrong. Whilst they've succeeded in getting a lot of people to buy this idea, they're now facing the problem that people don't trust them to use their supposedly world-ending tech responsibly.

[–] [email protected] 10 points 1 month ago* (last edited 1 month ago) (4 children)

People are "blatantly stealing my work," AI artist complains

When Jason Allen submitted his bombastically named Théâtre D’opéra Spatial to the US Copyright Office, they weren't so easily fooled as the judges back in Colorado. It was decided that the image could not be copyrighted in its entirety because, as an AI-generated image, it lacked the essential element of “human authorship”. The office decided that, at best, Allen could copyright specific parts of the piece that he worked on himself in Photoshop.

“The Copyright Office’s refusal to register Theatre D’Opera Spatial has put me in a terrible position, with no recourse against others who are blatantly and repeatedly stealing my work without compensation or credit.” If something about that argument rings strangely familiar, it might be due to the various groups of artists suing the developers of AI image generators for using their work as training data without permission.

via @[email protected]

[–] [email protected] 9 points 1 month ago* (last edited 1 month ago)

Our anti-AI militia will be called "The Artists' Rifles"

[–] [email protected] 8 points 1 month ago
[–] [email protected] 8 points 1 month ago* (last edited 1 month ago) (4 children)

HN seems to be particularly deranged today, doesn't it?

It mostly seems to be a mopey debate over whether Saltman's impending apotheosis is good or bad.

[–] [email protected] 8 points 1 month ago (13 children)

True believers at Vox's Future Perfect "vertical" let out a heartfelt REEEEEE as Saltman makes the obvious move to secure all the profits

https://www.vox.com/future-perfect/374275/openai-just-sold-you-out

[–] [email protected] 9 points 1 month ago* (last edited 1 month ago)

oh hey, it's that recurring thought I have about the "AI Safety" criti-hype popping back into my head

Expanding on that, part of me feels Altman is gonna find all the rhetoric he made about "AI doom" being used against him in the future - man's given the true believers reason to believe he'd commit omnicide-via-spicy-autocomplete for a quick buck.

Hell, the true believers who made this pretty explicitly pointed out Altman's made arguing for regulation a lot easier:

[screenshot: ai safety irony]

[–] [email protected] 8 points 1 month ago (2 children)

shocked that scorpions in a scorpion's nest funded by their scorpion mates might have fallen into stinging

[–] [email protected] 8 points 1 month ago

an nsfw found in the wild

[–] [email protected] 8 points 1 month ago

Job interviews by AI avatars are real and happening, apparently https://www.404media.co/ai-avatars-are-doing-job-interviews-now/
