blakestacey

joined 2 years ago
[–] [email protected] 5 points 1 month ago

There's a "critique of functional decision theory"... which turns out to be a blog post on LessWrong... by "wdmacaskill"? That MacAskill?!

[–] [email protected] 4 points 1 month ago

If you want to read Yudkowsky's explanation for why he doesn't spend more effort on academia, it's here.

spoiler alert: the grapes were totally sour

[–] [email protected] 6 points 1 month ago (2 children)

We have a few Wikipedians who hang out here, right? Is a preprint by Yud and co. a sufficient source to base an entire article on "Functional Decision Theory" upon?

[–] [email protected] 11 points 1 month ago (4 children)

If you go over to LessWrong, you can get some idea of what is possible.

[–] [email protected] 9 points 1 month ago* (last edited 1 month ago) (3 children)

You might think that this review of Yud's glowfic is an occasion for a "read a second book" response:

Yudkowsky is good at writing intelligent characters in a specific way that I haven't seen anyone else do as well.

But actually, the word intelligent is being used here in a specialized sense to mean "insufferable".

Take a stereotypical fantasy novel, a textbook on mathematical logic, and Fifty Shades of Grey.

Ah, the book that isn't actually about kink, but rather an abusive relationship disguised as kink — which would be a great premise for an erotic thriller, except that the author wasn't sufficiently self-aware to know that's what she was writing.

[–] [email protected] 12 points 1 month ago (1 children)

Among the leadership of the biggest AI capability companies (OpenAI, Anthropic, Meta, Deepmind, xAI), at least 4/5 have clearly been heavily influenced by ideas from LessWrong.

I'm trying, but I can't not donate any harder!

The most popular LessWrong posts, SSC posts or books like HPMoR are usually people's first exposure to core rationality ideas and concerns about AI existential risk.

Unironically the better choice: https://archiveofourown.org/donate

[–] [email protected] 13 points 1 month ago* (last edited 1 month ago) (4 children)

The post:

I think Eliezer Yudkowsky & many posts on LessWrong are failing at keeping things concise and to the point.

The replies: "Kolmogorov complexity", "Pareto frontier", "reference class".

[–] [email protected] 17 points 1 month ago

The lead-in to that is even "better":

This seems particularly important to consider given the upcoming conservative administration, as I think we are in a much better position to help with this conservative administration than the vast majority of groups associated with AI alignment stuff. We've never associated ourselves very much with either party, have consistently been against various woke-ish forms of mob justice for many years, and have clearly been read a non-trivial amount by Elon Musk (and probably also some by JD Vance).

"The reason for optimism is that we can cozy up to fascists!"

[–] [email protected] 13 points 1 month ago

The collapse of FTX also caused a reduction in traffic and activity of practically everything Effective Altruism-adjacent

Uh-huh.

[–] [email protected] 20 points 1 month ago (2 children)

An interesting thing came through the arXiv-o-tube this evening: "The Illusion-Illusion: Vision Language Models See Illusions Where There are None".

Illusions are entertaining, but they are also a useful diagnostic tool in cognitive science, philosophy, and neuroscience. A typical illusion shows a gap between how something "really is" and how something "appears to be", and this gap helps us understand the mental processing that leads to how something appears to be. Illusions are also useful for investigating artificial systems, and much research has examined whether computational models of perception fall prey to the same illusions as people. Here, I invert the standard use of perceptual illusions to examine basic processing errors in current vision language models. I present these models with illusory-illusions, neighbors of common illusions that should not elicit processing errors. These include such things as perfectly reasonable ducks, crooked lines that truly are crooked, circles that seem to have different sizes because they are, in fact, of different sizes, and so on. I show that many current vision language systems mistakenly see these illusion-illusions as illusions. I suggest that such failures are part of broader failures already discussed in the literature.

[–] [email protected] 13 points 1 month ago

Governments have criminalized the practice of managing your own health.

I have the feeling that they're not a British trans person talking about the NHS, or an American in a red state panicking about dying of sepsis because the baby they wanted so badly miscarried.

[–] [email protected] 15 points 1 month ago (1 children)

I must have been living under a rock/a different kind of terminally online, because I had only ever heard of Honey through Dan Olson's riposte to Doug Walker's The Wall, which describes Doug Walker delivering "an uncomfortably over-acted ad for online data harvesting scam Honey" (35:43).
