TechTakes

Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.

This is not debate club. Unless it’s amusing debate.

For actually-good tech, you want our NotAwfulTech community

Need to let loose a primal scream without collecting footnotes first? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid: Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this.)

[–] [email protected] 3 points 48 minutes ago (2 children)

HEY GUYS CHECK THIS OUT

https://www.youtube.com/watch?v=c1yHrZR_Sgg

with the awful (systems) assistance of self, fasterandworse and jp of this parish

no idea when i'll do another one, this one was 3 hrs faff for 5 min video lol

things I need: a better mic, a chair that doesn't swivel

[–] [email protected] 3 points 5 minutes ago (1 children)

how much is templated now? reckon it'll be 3hrs every time?

[–] [email protected] 2 points 4 minutes ago

if i stop saying "um" all the time that's half an hour less in post right there

[–] [email protected] 2 points 21 minutes ago (1 children)

lol I have no idea if anyone would recognize me by that reference

[–] [email protected] 2 points 15 minutes ago

(I don’t mind, just surprised at someone Of The Internet actually using that name :D)

[–] [email protected] 4 points 3 hours ago

what's that sound? is it the sound of a previous post coming past? naaaah, I'm sure it can't be that. discord's a Bro™️, and discord super totes Won't Fuck The Users®️, I'm sure I'll shortly be told by some vapid fencesitter that this will all be Perfectly Okay!

[–] [email protected] 8 points 19 hours ago (3 children)
[–] [email protected] 6 points 2 hours ago* (last edited 2 hours ago) (1 children)

Yudkowsky was trying to teach people how to think better – by guarding against their cognitive biases, being rigorous in their assumptions and being willing to change their thinking.

No he wasn't.

In 2010 he started publishing Harry Potter and the Methods of Rationality, a 662,000-word fan fiction that turned the original books on their head. In it, instead of a childhood as a miserable orphan, Harry was raised by an Oxford professor of biochemistry and knows science as well as magic

No, Hariezer Yudotter does not know science. He regurgitates the partial understanding and the outright misconceptions of his creator, who has read books but never had to pass an exam.

Her personal philosophy also draws heavily on a branch of thought called “decision theory”, which forms the intellectual spine of Miri’s research on AI risk.

This presumes that MIRI's "research on AI risk" actually exists, i.e., that their pitiful output can be called "research" in a meaningful sense.

“Ziz didn’t do the things she did because of decision theory,” a prominent rationalist told me. She used it “as a prop and a pretext, to justify a bunch of extreme conclusions she was reaching for regardless”.

"Excuse me, Pot? Kettle is on line two."

[–] [email protected] 7 points 2 hours ago* (last edited 2 hours ago)

It goes without saying that the AI-risk and rationalist communities are not morally responsible for the Zizians any more than any movement is accountable for a deranged fringe.

When the mainstream of the movement is “ve zhould chust bomb all datacenters”, maaaaaybe they are?

[–] [email protected] 5 points 2 hours ago (2 children)

I feel like it still starts off too credulous towards the rationalists, but it's still an informative read.

Around this time, Ziz and Danielson dreamed up a project they called “the rationalist fleet”. It would be a radical expansion of their experimental life on the water, with a floating hostel as a mothership.

Between them, Scientology and the libertarians, what the fuck is it with these people and boats?

[–] [email protected] 4 points 1 hour ago

a really big boat is the ultimate compound. escape even the surly bonds of earth!

[–] [email protected] 4 points 2 hours ago (1 children)

I assume it’s to get them to cooperate.

[–] [email protected] 4 points 1 hour ago

Ah, yes. The implication.

[–] [email protected] 11 points 14 hours ago (3 children)

Ziz helpfully suggested I use a gun with a potato as a makeshift suppressor, and that I might destroy the body with lye

I looked up a video of someone trying to use a potato as a suppressor and was not disappointed.

[–] [email protected] 5 points 8 hours ago (1 children)

if this is peak rationalist gunsmithing, i wonder what their peak chemical engineering looks like

the body is placed in a pressure vessel which is then filled with a mixture of water and potassium hydroxide, and heated to a temperature of around 160 °C (320 °F) at an elevated pressure which precludes boiling.

Also, lower temperatures (98 °C (208 °F)) and pressures may be used such that the process takes a leisurely 14 to 16 hours.

https://en.wikipedia.org/wiki/Water_cremation

[–] [email protected] -2 points 8 hours ago (1 children)

@skillissuer

I’m fairly sure that a 50 gallon drum of lye at room temperature will take care of a body in a week or two. Not really suited to volume “production”, which is what water cremation businesses need.

[–] [email protected] 5 points 8 hours ago

as a rule of thumb, all else being equal, reaction rates go up 2x to 3x for every 10 °C increase in temperature, so doing it at room temperature would take anywhere between 250x and 6500x longer (4 months to 10 years??). but everything else really doesn't stay equal here, because there are things like lower solubility of something that now coats something else and prevents reaction, fat no longer melting, proteins no longer denaturing thermally, and no stirring from convection and boiling.

it will also reek of ammonia the entire time
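
A quick back-of-the-envelope sketch of that rule of thumb, for anyone who wants to check the arithmetic. It assumes the comparison is the low-temperature process quoted above (98 °C, 14–16 hours) versus roughly 18 °C room temperature, i.e. an ~80 °C gap, which is what lines up with the 250x–6500x figures:

```python
# Q10 rule of thumb: reaction rate multiplies by ~2-3x for every 10 °C of extra heat,
# so cooling by delta_t slows the process down by roughly q10 ** (delta_t / 10).
def slowdown_factor(q10: float, delta_t_c: float) -> float:
    return q10 ** (delta_t_c / 10)

BASE_HOURS = 15   # the ~14-16 hour process at 98 °C quoted above
DELTA_T_C = 80    # assumed gap between the 98 °C process and ~18 °C room temperature

for q10 in (2, 3):
    factor = slowdown_factor(q10, DELTA_T_C)
    days = BASE_HOURS * factor / 24
    print(f"Q10={q10}: ~{factor:,.0f}x slower -> ~{days:,.0f} days (~{days / 365:.1f} years)")
```

That works out to roughly 160 days at the 2x end and about 11 years at the 3x end, which is where the "4 months to 10 years" range comes from, before any of the confounders listed in the comment.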

[–] [email protected] 7 points 13 hours ago

you undersold this

that guy's face, amazing

[–] [email protected] 4 points 12 hours ago

@sailor_sega_saturn

He made a fancy coatrack.

[–] [email protected] 10 points 22 hours ago* (last edited 21 hours ago) (2 children)

Fellas, 2023 called. Dan (and Eric Schmidt wtf, Sinophobia this man down bad) has gifted us with a new paper and let me assure you, bombing the data centers is very much back on the table.

"Superintelligence is destabilizing. If China were on the cusp of building it first, Russia or the US would not sit idly by—they'd potentially threaten cyberattacks to deter its creation.

@ericschmidt @alexandr_wang and I propose a new strategy for superintelligence. 🧵

Some have called for a U.S. AI Manhattan Project to build superintelligence, but this would cause severe escalation. States like China would notice—and strongly deter—any destabilizing AI project that threatens their survival, just as how a nuclear program can provoke sabotage. This deterrence regime has similarities to nuclear mutual assured destruction (MAD). We call a regime where states are deterred from destabilizing AI projects Mutual Assured AI Malfunction (MAIM), which could provide strategic stability.

Cold War policy involved deterrence, containment, nonproliferation of fissile material to rogue actors. Similarly, to address AI's problems (below), we propose a strategy of deterrence (MAIM), competitiveness, and nonproliferation of weaponizable AI capabilities to rogue actors.

Competitiveness: China may invade Taiwan this decade. Taiwan produces the West's cutting-edge AI chips, making an invasion catastrophic for AI competitiveness. Securing AI chip supply chains and domestic manufacturing is critical.

Nonproliferation: Superpowers have a shared interest to deny catastrophic AI capabilities to non-state actors—a rogue actor unleashing an engineered pandemic with AI is in no one's interest. States can limit rogue actor capabilities by tracking AI chips and preventing smuggling.

"Doomers" think catastrophe is a foregone conclusion. "Ostriches" bury their heads in the sand and hope AI will sort itself out. In the nuclear age, neither fatalism nor denial made sense. Instead, "risk-conscious" actions affect whether we will have bad or good outcomes."

Dan literally believed 2 years ago that we should have strict thresholds on model training over a certain size lest a big LLM spawn superintelligence (thresholds we have since well passed, yet somehow we are not paper clip soup). If all it takes to make super-duper AI is a big data center, then how the hell can you have mutually-assured-destruction-style scenarios? You literally cannot tell what they are doing in a data center from the outside (maybe a building is using a lot of energy, but it's not like you can say, "oh, they're about to run superintelligence.exe, sabotage the training run"). MAD "works" because satellites make it obvious when the nukes are flying. If the deepseek team is building skynet in their attic for 200 bucks, this shit makes no sense.

Ofc, this also assumes one side will have a technology advantage, which is the opposite of what we've seen. The code to make these models is a few hundred lines! There is no moat!

Very dumb, do not show this to the orangutan and muskrat. Oh wait! Dan is Musky's personal AI safety employee, so I assume this will soon be the official policy of the US.

link to bs: https://xcancel.com/DanHendrycks/status/1897308828284412226#m

[–] [email protected] 6 points 8 hours ago (1 children)

Mutual Assured AI Malfunction (MAIM)

The proper acronym should be M'AAM. And instead of a 'Roman salute' they can tip their fedora as a distinctive sign 🤷‍♂️

[–] [email protected] 3 points 3 hours ago

the only part of this I really approve of is how likely these fuckers are to want to Speak To The Manager

[–] [email protected] 6 points 13 hours ago (1 children)

I guess now that USAID is being defunded and the government has turned off their anti-russia/china propaganda machine, private industry is taking over the US hegemony psyop game. Efficient!!!

/s /s /s I hate it all

[–] [email protected] 6 points 8 hours ago

If they're gonna fearmonger can they at least be creative about it?!?! Everyone's just dusting off the mothballed plans to Quote-Unquote "confront" Chy-na after a quarter-century detour of fucking up the Middle East (moreso than the US has done in the past)

[–] [email protected] 8 points 1 day ago (1 children)
[–] [email protected] 8 points 1 day ago

If I had Bluesky access on my phone, I'd be dropping so much lore in that thread. As a public service. And because I am stuck on a slow train.

[–] [email protected] 11 points 1 day ago* (last edited 1 day ago) (1 children)

New ultimate grift dropped, Ilya Sutskever gets $2B in VC funding, promises his company won't release anything until ASI is achieved internally.

[–] [email protected] 6 points 17 hours ago

I'm convinced that these people have no choice but to do their next startup, especially if their names are already prominent in the press like Sutskever and Murati. Once you're off the grift train, there is no easy way back on. I guess you can maybe sneak back in as a VC staffer or an independent board member, but that doesn't seem quite as remunerative.

[–] [email protected] 11 points 1 day ago (2 children)

In other news, a piece from Paris Marx came to my attention, titled "We need an international alliance against the US and its tech industry". Personally gonna point to a specific paragraph which caught my eye:

The only country to effectively challenge [US] dominance is China, in large part because it rejected US assertions about the internet. The Great Firewall, often solely pegged as an act of censorship, was an important economic policy to protect local competitors until they could reach the scale and develop the technical foundations to properly compete with their American peers. In other industries, it’s long been recognized that trade barriers were an important tool — such that a declining United States is now bringing in its own with the view they’re essential to protect its tech companies and other industries.

I will say, it does strike me as telling that Paris was able to present the unofficial mascot of Chinese censorship this way without getting any backlash.

[–] [email protected] 6 points 7 hours ago* (last edited 7 hours ago)
[–] [email protected] 7 points 1 day ago

If Paris Marx is the little domino that causes total collapse of US hegemony, I’ll join the patreon at the highest tier forever

[–] [email protected] 8 points 1 day ago (1 children)

New piece from Techdirt: Why Techdirt Is Now A Democracy Blog (Whether We Like It Or Not)

Strongly recommended reading overall, and strongly recommended you check out Techdirt - they've been doing some pretty damn good reporting on the current shitshow we're living through.

[–] [email protected] 8 points 1 day ago

I've read Masnick for over 20 years and he's never learnt to write coherently. At least this one isn't blaming Europe.

[–] [email protected] 9 points 1 day ago (1 children)

another cameo appearance in the TechTakes universe from George Hotz with this rich vein of sneerable material: The Demoralization is just Beginning

wowee where to even start here? this is basically just another fucking neoreactionary screed. as usual, some of the issues identified in the piece are legitimate concerns:

Wanna each start a business, pass dollars back and forth over and over again, and drive both our revenues super high? Sure, we don’t produce anything, but we have companies with high revenues and we can raise money based on those revenues...

... nothing I saw in Silicon Valley made any sense. I’m not going to go into the personal stories, but I just had an underlying assumption that the goal was growth and value production. It isn’t. It’s self licking ice cream cone scams, and any growth or value is incidental to that.

yet, when it comes to engaging with these issues, the analysis presented is completely detached from reality and devoid of any evidence of more than a dozen seconds of thought. his vision for the future of America is not one that

kicks the can further down the road of poverty, basically embraces socialism, is stagnant, is stale, is a museum

but one that instead

attempt[s] to maintain an empire.

how you may ask?

An empire has to compete on its merits. There’s two simple steps to restore american greatness:

  1. Brain drain the world. Work visas for every person who can produce more than they consume. I’m talking doubling the US population, bringing in all the factory workers, farmers, miners, engineers, literally anyone who produces value. Can we raise the average IQ of America to be higher than China?

  2. Back the dollar by gold (not socially constructed crypto), and bring major crackdowns to finance to tie it to real world value. Trading is not a job. Passive income is not a thing. Instead, go produce something real and exchange it for gold.

sadly, Hotz isn't exactly optimistic that the great american empire will be restored, for one simple reason:

[the] people haven’t been demoralized enough yet
