titotal

joined 11 months ago
[–] [email protected] 24 points 2 months ago (7 children)

Oxford instituted a fundraising freeze. They knew the org could have gotten oodles of funding from any number of strange tech people; they just disliked it so much that they didn't care.

[–] [email protected] 16 points 2 months ago (3 children)

Fun revelations that SBF was going to try to invest in Elon buying twitter because he thought it would make money (lol), and was seriously proposing "put twitter on the blockchain" as his pitch. One of the dumbest ideas I've ever heard, right behind every other "X on blockchain" proposal.

[–] [email protected] 14 points 2 months ago (1 children)

I'm sure they could have found someone in the EA ecosystem to throw them money if it weren't for the fundraising freeze. This seems like a case of Oxford killing the institute deliberately. The 2020 freeze predates the Bostrom email, and the guy Oxford consulted said the relationship had been dysfunctional for many years.

It's not like Oxford is hurting for money; they probably just decided FHI was too much of a pain to work with and hurt the Oxford brand.

[–] [email protected] 7 points 2 months ago (1 children)

I feel this makes it an unlikely great filter though. Surely some aliens would be less stupid than humanity?

Or they could be on a planet with far smaller fossil fuel reserves, so they don't get the opportunity to kill themselves.

[–] [email protected] 22 points 2 months ago (2 children)

I feel really bad for the person behind the "notkilleveryonism" account. They've been completely taken in by AI doomerism and are clearly terrified by it. They'll either be terrified for their entire life even as the predicted doom fails to appear, or realise at some point that they wasted a huge portion of their life and that their entire system of belief is a lie.

False doomerism is really harming people, and that sucks.

[–] [email protected] 8 points 2 months ago

Yeah, the Fermi paradox really doesn't work here: an AI that was motivated and smart enough to wipe out humanity would be unlikely to just immediately off itself. Most of the doomerism relies on "tile the universe" scenarios, which would be extremely noticeable.

[–] [email protected] 18 points 2 months ago* (last edited 2 months ago) (2 children)

The Future of Humanity Institute is the EA longtermist organisation at Oxford run by Swedish philosopher Nick Bostrom, who got in trouble for an old racist email and a subsequent bad apology. It is the one that is rumoured to be shutting down.

The Future of Life Institute is the EA longtermist organisation run by Swedish physicist Max Tegmark, who got in trouble for offering to fund a neo-Nazi newspaper (he didn't actually go through with it and claimed ignorance). It is the one that got the half-billion-dollar windfall.

I can't imagine how you managed to conflate these two highly different institutions.

[–] [email protected] 3 points 3 months ago (5 children)

I'm not a stock market person, but didn't the hype around bitcoin last like a decade, despite it not having a single widespread use case? Why wouldn't LLM hype last just as long, when people actually use LLMs for things?

[–] [email protected] 13 points 3 months ago (3 children)

The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.

The real problem with the Rationalists is that they *think they can do better*, that knowing a few cognitive fallacies and logical tricks will make you better than doctors at medicine, better than quantum physicists at quantum physics, etc.

We need to explain that yes, science has its flaws, but it still shits all over pseudobayesianism.

[–] [email protected] 15 points 3 months ago (1 children)

This definitely reads like those tedious "April Fools" posts where you can tell they are actually 90% serious but want the cover of a joke.

[–] [email protected] 12 points 3 months ago (3 children)

To be honest, I'm just kinda annoyed that he ended on the story about his mate Aaron, who went on surfing trips to Indonesia and gave money to his new poor village friends. The author says Aaron is "accountable" to the village, but that's not true, because Aaron is a comparatively rich first-world academic who can go home at any time. Is Aaron "shifting power" to the village? No, because if they don't treat him well, he'll stop coming to the village and stop funding their water supply upgrades. And he personally benefits from his purchases, in praise and friendship.

I'm sure Aaron is a fine guy, and I'm not saying he shouldn't give money to his village mates, but this is not a good model for philanthropy! I would argue that a software developer who just donates a bunch of money unconditionally to the village (via GiveDirectly or something) is more noble than Aaron here, giving without any personal benefit or feel-good surfer energy.

[–] [email protected] 11 points 3 months ago (12 children)

I enjoyed the takedowns (wow, this guy really hates MacAskill), but the overall conclusions of the article seem a bit lost. If malaria nets are like a medicine with side effects, then the solution is not to throw away the medicine. (Giving away free nets to people probably does not have a significant death toll!) At the end they seem to suggest, like, voluntourism as the preferred alternative? I don't think Africa needs to be flooded with dorky software engineers personally going to villages to "help out".


Brain genius Beff Jezos manages to butcher both philosophy and physics at the same time!
