I guess that means AI is now on the same level as an octopus.
This is brilliant and I’m saving it and will post a link to it the next time someone at work asks why we can’t “just use AI to do it” when a ticket gets rejected for being stupid and/or unreasonable.
However:
The first is that we have some sort of intelligence explosion, where AI recursively self-improves itself, and we're all harvested for our constituent atoms […]. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound.
Yeah, I gotta admit, I am surprised. Because I have not found a single reasonable argument for this horseshit and the rest of the article (as well as the others I read from their blog) does not read like it’s been written by someone who’d buy into AI foom.
Seriously, could someone gift this dude a subscription to spicyautocompletegirlfriends.ai so he can finally cum?
One thing that's crazy: it's not just skeptics, virtually EVERYONE in AI has a terrible track record - and all in the same OPPOSITE direction from usual! In every other industry, due to the Planning Fallacy etc, people predict things will take 2 years, but they actually take 10 years. In AI, people predict 10 years, then it happens in 2!
ai_quotes_from_1965.txt
Microsoft announced that 2024 will be the era of the AI PC, and unveiled that upcoming Windows PCs would ship with a dedicated Copilot button on the keyboard.
Tell me they're desperate because not many people use that shit without telling me they're desperate because not many people use that shit.
Do they think intelligence is limited to speaking fancy, and/or having the tools to solve specific problems?
Yes. They literally think that. I mean, why else would they assume a spicy text extruder with a built-in thesaurus is so smart?
Oh shit, I remember the Musk namedrop in Discovery. Didn’t they name him alongside historical scientists and inventors? I seldom feel actual cringe but that was actually embarrassing.
There’s a giant overlap between Christian fundamentalism and the whole singularity shtick, and Yud’s whole show is really the technological version of Christian futurist eschatology (i.e. the belief that the Book of Revelation etc. are literal depictions of the future).
Cory Doctorow and Charlie Stross call it Rapture of the Nerds.
I bet that guy watches anime on 2x speed and skips parts without dialog to feel more “productive”.
Proof that pasta doesn't exist:
- An atom isn't pasta.
- Adding an atom to something that isn't pasta doesn't make it pasta.
- Therefore nothing is pasta.
Oh look, Elon openly snuggling up to Nazis and "just asking questions". As if I didn't hate this clown enough.
(For anyone out of the loop: the AfD is a far-right political party in Germany and the spiritual successor to the NSDAP. They praise the SS and advocate for the legalization of Holocaust denial and historical revisionism, the removal of hate crimes from the criminal code, and more. They're so openly Nazis that they got kicked out of the EU parliament's far-right ID coalition for being too fucking Nazi. There's no leeway. They're literal card-carrying national socialists.)
I'm reading Feynman's lectures on electromagnetism right now, and GPT-4o can answer questions and help me with the math. I doubt that even a smart high schooler would be able to do it.
Ten bucks this guy hasn’t double-checked anything his chatbot told him but accepted it as truth because it used big words in grammatically coherent ways.
My favorite bullshit test is to have the AI count letters. Something as simple as "How many A's are in this sentence?" will have the thing give you a seemingly random number (which might sometimes even be correct), but as soon as you ask it to list them, it will have a stroke and just make up stuff that makes no sense.
I can't be arsed to make an account with OpenAI or whatever, so here's an example with Bing Chat, but this has worked on every single chatbot I've tested it on.
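For contrast, here's how trivially a deterministic program handles the same task the chatbot flubs. This is my own throwaway sketch, not anything from the article:

```python
# Count the A's in the exact question posed to the chatbot.
sentence = "How many A's are in this sentence?"

# Case-insensitive count: lowercase everything, then count 'a'.
count = sentence.lower().count('a')

print(count)  # prints 3 ("many", "A's", "are")
```

Three lines, no GPUs, and it lists the right answer every single time.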
Another fun test (as suggested by @flizzo last time I brought this up) is to have the chatbot attempt to crack a simple code. Just Caesar cipher something and ask it to crack it. Hilarity will ensue. You can even ask it to encrypt it first and then to crack it for extra fun.
Okay, first of all, it dropped a letter. And the shift isn't even correct, lol. It said Y becomes F and then does whatever the fuck.
Okay, so let's give it an easy example, and even tell it the shift. Let's see how that works.
This shit doesn't even produce one correct message. Internal state or not, it should at least be able to read the prompt correctly and then produce an answer based on that. I mean, the DuckDuckGo search field can fucking do it!
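For anyone who wants to replicate the test: a Caesar cipher just shifts every letter a fixed number of positions and wraps around the alphabet, so with shift 3, Y really should become B, not F. A minimal Python sketch (my own, purely illustrative) that encrypts and decrypts correctly:

```python
def caesar(text: str, shift: int) -> str:
    """Shift each letter by `shift` positions, wrapping around the alphabet.
    Non-letters pass through unchanged. Negative shift decrypts."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

encrypted = caesar("HELLO", 3)
print(encrypted)              # KHOOR
print(caesar(encrypted, -3))  # HELLO
```

Note that it never drops a letter, and Y with shift 3 comes out as B (`caesar("Y", 3)` is `"B"`), exactly as the wraparound demands.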