GorillasAreForEating

joined 1 year ago
 

The article doesn't mention SSC directly, but I think it's pretty obvious where this guy is getting his ideas.

[–] GorillasAreForEating 44 points 3 months ago (3 children)

When they made an alt-right equivalent of Patreon, they called it "Hatreon". This stuff is like a game to them.

[–] GorillasAreForEating 3 points 3 months ago

I'd say it's half-serious satire; she did end up dating a "high net-worth individual", after all.

[–] GorillasAreForEating 6 points 3 months ago* (last edited 3 months ago)

If Caroline Ellison hadn't been in an actual relationship with a "high net-worth individual", I would have said it was just straightforward satire, but given the context I think she's using the mask of irony to pretend she isn't revealing her true self.

Her words may be satirical, but her actions were more like "this but unironically".

 

An old post from Caroline Ellison's Tumblr, since deleted.

[–] GorillasAreForEating 15 points 3 months ago* (last edited 3 months ago)

Here is the document that mentions EA as a risk factor; some quotes below:

Fourth, the defendant may feel compelled to do this fraud again, or a version of it, based on his use of idiosyncratic, and ultimately for him pernicious, beliefs around altruism, utilitarianism, and expected value to place himself outside of the bounds of the law that apply to others, and to justify unlawful, selfish, and harmful conduct. Time and time again the defendant has expressed that his preferred path is the one that maximizes his version of societal value, even if it imposes substantial short term harm or carries substantial risks to others... In this case, the defendant’s professed philosophy has served to rationalize a dangerous brand of megalomania—one where the defendant is convinced that he is above the law and the rules of the road that apply to everyone else, who he necessarily deems inferior in brainpower, skill, and analytical reasoning.

[–] GorillasAreForEating 5 points 6 months ago* (last edited 6 months ago)

"post-rationalists", essentially just rationalists who reject Yudkowksy's anti-woo stance

[–] GorillasAreForEating 5 points 6 months ago

I should probably confess I didn't actually read the longer ones all the way through, for the reason you just demonstrated.

[–] GorillasAreForEating 8 points 7 months ago

Yeah, it seems doubtful they'd get along, though I imagine both groups were present, based on what I know of the rationalists.

Also lol at the gender ratio.

[–] GorillasAreForEating 7 points 7 months ago* (last edited 7 months ago) (1 children)

I should have said at least two sex offenders.

In other threads people are saying the victim was his 9 year old stepdaughter, but who knows?

[–] GorillasAreForEating 7 points 7 months ago* (last edited 7 months ago) (17 children)

Apparently Aella and Robin Hanson showed up at vibecamp too; I guess it's a big deal for the greater rationalist community.

[–] GorillasAreForEating 8 points 7 months ago (3 children)

I apologize if you saw my post before the current edit; I was similarly confused.

To clarify, there are actually two sex offenders: Brent Dill (@HephaistosF) and a friend of his, @chaosprime (I haven't discovered the real name yet). Someone recently posted @chaosprime's criminal record showing that he was convicted of sexual assault in 2000.

[–] GorillasAreForEating 6 points 7 months ago

SEO will pillage the commons.

My personal conspiracy theory (not sure if I actually believe this yet):

The idea that people would use generative AI to make SEO easier (and thus make search engine results worse) was not an unfortunate side effect of generative AI; it was the entire purpose. It's no coincidence that OpenAI teamed up with Google's biggest rival in search engines; we're now seeing an arms race between tech giants using spambot generators to overwhelm the enemy's filters.

The decision to make ChatGPT public was not about a concern for openness (if it were, they would have made the earlier versions of GPT public too); it's more that they had a business partner lined up, and Google search had become enshittified enough that they thought they could pull off a successful "disruption".

 

I somehow missed this one until now. Apparently it was once mentioned in the comments on the old sneerclub, but I don't think it got a proper post, and I think it deserves one.

 

From Sam Altman's blog, pre-OpenAI

 

Image taken from this tweet: https://twitter.com/softminus/status/1732597516594462840

The post title was this response: https://twitter.com/QuintusActual/status/1732615870613258694

Sadly, the article is behind a paywall and I am loath to give Scott my money.

 

I was wondering if someone here has a better idea of how EA developed in its early days than I do.

Judging by the link I posted, it seems like Yudkowsky used the term "effective altruist" years before Will MacAskill or Peter Singer adopted it. The link doesn't mention this explicitly, but Will MacAskill was also a LessWrong user, so it seems at least plausible that Yudkowsky is the true father of the movement.

I want to sort this out because I've noticed that recently a lot of EAs have been downplaying the AI and longtermist elements within the movement and talking more about Peter Singer as the movement's founder. By contrast, the impression I get of EA's founding, based on what I know, is that EA started with Yudkowsky and then MacAskill, with Peter Singer only getting involved later. Is my impression mistaken?

 

At various points, on Twitter, Jezos has defined effective accelerationism as “a memetic optimism virus,” “a meta-religion,” “a hypercognitive biohack,” “a form of spirituality,” and “not a cult.” ...

When he’s not tweeting about e/acc, Verdon runs Extropic, which he started in 2022. Some of his startup capital came from a side NFT business, which he started while still working at Google’s moonshot lab X. The project began as an April Fools joke, but when it started making real money, he kept going: “It's like it was meta-ironic and then became post-ironic.” ...

On Twitter, Jezos described the company as an “AI Manhattan Project” and once quipped, “If you knew what I was building, you’d try to ban it.”

 

Most of the article is well-trodden ground if you've been following OpenAI at all, but I thought this part was noteworthy:

Some members of the OpenAI board had found Altman an unnervingly slippery operator. For example, earlier this fall he’d confronted one member, Helen Toner, a director at the Center for Security and Emerging Technology, at Georgetown University, for co-writing a paper that seemingly criticized OpenAI for “stoking the flames of AI hype.” Toner had defended herself (though she later apologized to the board for not anticipating how the paper might be perceived). Altman began approaching other board members, individually, about replacing her. When these members compared notes about the conversations, some felt that Altman had misrepresented them as supporting Toner’s removal. “He’d play them off against each other by lying about what other people thought,” the person familiar with the board’s discussions told me. “Things like that had been happening for years.”

 

non-paywall archived version here: https://archive.is/ztech
