And, of course, we know that once one company uses those chats, it becomes literally impossible for anyone else to use them.
ShakingMyHead
Investors demand growth. The problem is that Microsoft has basically won Capitalism and has no real area left to grow into.
I don't think they're planning to build the 5GW datacentres on TMI, though.
The theoretical Three Mile Island datacentre is probably the endpoint of all of this. I think that whatever they build there (should it actually happen) has to be AGI. Like, unambiguously AGI. Where even the most hardline AI skeptics will react with "Holy shit, they actually did it."
Anything less and it'll be nearly impossible to justify building 5 nuclear reactors just to power one of these datacentres.
So even if it is happening, it's still not happening.
Building 5-7 facilities of 5 GW each, packed with GPUs, would take an extremely large amount of silicon. Not to mention the 25-35 nuclear power plants they apparently want to build to power them.
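For what it's worth, a quick back-of-envelope check on how 5-7 facilities at 5 GW each maps to 25-35 reactors (the ~1 GW-per-reactor figure is my own assumption, not something stated above):

    # Rough arithmetic only; assumes one large reactor supplies about 1 GW.
    GW_PER_FACILITY = 5
    GW_PER_REACTOR = 1  # assumed typical output of a single large reactor
    for facilities in (5, 7):
        total_gw = facilities * GW_PER_FACILITY
        print(facilities, "facilities ->", total_gw, "GW ->", total_gw // GW_PER_REACTOR, "reactors")
    # 5 facilities -> 25 GW -> 25 reactors
    # 7 facilities -> 35 GW -> 35 reactors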
So on the list of things not happening...
I believe the future is going to be so bright that no one can do it justice by trying to write about it now.
Uh
Finally, of course, it is very much not just rationalists who believe that AI represents an existential risk. We just got there twenty years early.
This one?
We could also just fluoridate the water supply, which massively reduces cavities.
Obligatory note that, speaking as a rationalist-tribe member: to a first approximation, nobody in the community is actually interested in the Basilisk, and nobody has been for at least a decade.
Sure, but that doesn't change the fact that the head EA guy wrote an op-ed for Time magazine arguing that a nuclear holocaust is preferable to a world that has GPT-5 in it.
I think what it actually clarified is that personal information generated by an AI model is now covered under the law, not just information used as training data.