BrickedKeyboard

joined 1 year ago
[–] [email protected] -1 points 1 year ago

answered above.

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (8 children)

I wanted to know what you know that I don't. If rationalists are all scammers and not genuinely trying to be, per the name, 'less wrong' in their view of reality, then what's your model of reality? What do you know? So far, unfortunately, I haven't seen anything. Sneer Club's "reality model" seems to be "whatever the mainstream average person knows, plus one physicist", and it exists to make fun of rationalists' mistakes while, I assume, ignoring any successes, if there are any.

Which is fine, I guess? Mainstream knowledge is probably usually correct. It's just that I already know it, so there's nothing to be learned here.

[–] [email protected] -2 points 1 year ago (2 children)

This pattern shows up often when people criticize Tesla or SpaceX. And yes, if you measure "current reality" against "promises of their hype man / lead shitposter and internet troll", they fall short; Tesla will probably never achieve full self-driving with anything like its current approach. But compare Tesla to other automakers, or to most automakers that ever existed, or SpaceX to any rocket company since 1970, and there's no comparison. Likewise, if you're going to compare the internet to what came before it, compare it to BBSes you would access via modem, or to fax machines and libraries. No comparison.

Similarly, you should compare GPT-4, and the next large model to be released (Gemini), against all AI software that came before. There's no comparison.

[–] [email protected] -2 points 1 year ago

take some time and read this

I read it. I appreciated the point that human perception of current AI performance can deceive us, though this is nothing new; people were fooled by ELIZA.

It's a weak argument, though. For causing an AI singularity, functional intelligence is the relevant parameter. Functional intelligence just means: if the machine is given a task, what is the probability it completes the task successfully? In principle, even an infinite Chinese room can have functional intelligence (the machine just looks up the sequence of steps for any given task).

People have benchmarked GPT-4, and it has general functional intelligence at tasks that can be done on a computer. You can also just pay $20 a month and try it. It's below human level overall, I think, but still surprisingly strong given that it's emergent behavior from predicting tokens.
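The "probability of task completion" notion above can be sketched as a toy benchmark harness. This is only an illustration, not any real benchmark: `attempt_task` is a hypothetical stand-in for querying a model and grading its answer.

```python
import random

def attempt_task(task: str) -> bool:
    """Stand-in for querying a model and grading its output.

    In a real benchmark this would call the model and check the answer;
    here we just pretend the model succeeds 70% of the time.
    """
    return random.random() < 0.7

def functional_intelligence(tasks: list[str], trials: int = 100) -> float:
    """Estimate P(task completed successfully), averaged over tasks."""
    successes = sum(
        attempt_task(t) for t in tasks for _ in range(trials)
    )
    return successes / (len(tasks) * trials)

random.seed(0)  # deterministic for the example
score = functional_intelligence(["add two numbers", "write a sort"], trials=500)
print(f"estimated success rate: {score:.2f}")
```

The point of the definition is that it is behavioral: it doesn't matter whether the system "understands" anything, only the measured success rate on the task distribution.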

[–] [email protected] -1 points 1 year ago* (last edited 1 year ago) (21 children)

I appreciated this post because it never occurred to me that the thumb might be on the scales for the "rules for discourse" that seem to be the norm around the rat forums. I personally ignore most of it. However, the "ES" rat phrase is simply saying, "I know we humans are biased observers; this is where I'm coming from." If the topic were renewable energy and I were the head of extraction at BP, you could expect that whatever I have to say is probably biased against renewable energy.

My other thought reading this was: what about the truth? Maybe the mainstream is correct about everything. "Sneer club" seems to be mostly mainstream opinions. That's fine, I guess, but the mainstream is sometimes wrong about poorly examined issues or near-future events. The collective opinion of everyone doesn't really price in things that are about to happen, even when they're obvious to experts. For example, mainstream opinion on covid usually lagged several weeks behind Zvi's posts on LessWrong.

Where I am going with this: you can point out bad arguments on my part, but in the end, does truth matter? Are we here to score points on each other, or to share what we think reality is, or will very soon be?

[–] [email protected] -2 points 1 year ago

To be clear (maybe you will be unimpressed with this), scale matters. I said in the above text "10 times current industrial output. Within 17 years RMR, robots making robots." If you already priced that in, ok, that's an acceptable position, but the magnitude of a singularity matters, not just whether it's happening.

[–] [email protected] -1 points 1 year ago (1 children)

And just to be clear: for one to be "lost in the AI religion", the claims have to be false, correct? We will not see the things I mentioned within the timeframes I gave (7 years and 17 years; implicitly, if there is no immediate progress toward the nearer deadline within 1 year, it's not going to happen).

Google's Gemini will not be multimodal, or capable of learning to do tasks via reinforcement learning to human level, right? Robotics foundation models will not work?

[–] [email protected] 0 points 1 year ago (9 children)

Real talk: a real doll with the brain of a calculator would be a substantial product improvement.

[–] [email protected] -2 points 1 year ago (7 children)

Serious answer, not from Yudkowsky: the AI doesn't do any of that. It helps people cheat on their homework, write their code and form letters faster, and brings in revenue. The AI's owner uses the revenue to buy GPUs. With the GPUs they make the AI better. Now it can do a bit more than before, so they buy more GPUs, and in theory this continues until the list of tasks the AI can do includes "most of the labor in a chip fab", at which point GPUs become cheap and things start to get crazy.

It's the same elementary-school logic, but this is also how a nuke works.
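The revenue-to-GPUs feedback loop described above is just compounding growth. A minimal sketch, with every coefficient invented purely for illustration:

```python
def capability_loop(capability: float, years: int,
                    revenue_per_capability: float = 1.0,
                    gpus_per_dollar: float = 0.1,
                    gain_per_gpu: float = 0.05) -> float:
    """Toy model of the loop: capability -> revenue -> GPUs -> more capability.

    All coefficients are made up; the point is only the compounding shape,
    the same shape as a chain reaction with multiplication factor > 1.
    """
    for _ in range(years):
        revenue = capability * revenue_per_capability  # AI earns money
        gpus = revenue * gpus_per_dollar               # money buys compute
        capability += gpus * gain_per_gpu              # compute improves the AI
    return capability

print(capability_loop(1.0, 10))
```

With these numbers each cycle multiplies capability by a constant factor of 1.005, so growth is exponential; whether the real-world factor is above or below 1 is exactly the disputed question.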

[–] [email protected] -5 points 1 year ago (5 children)

They also hyped autonomous cars, and the internet itself including streaming video, for years before either was practical. Your filter of "it's all hype" only works 99 percent of the time.

[–] [email protected] -1 points 1 year ago (17 children)

Just to summarize your beliefs, I think: rationalists are wrong about a lot of things, and they're assholes. And also the singularity (an idea that predates Yudkowsky's existence) is not in fact possible by the mechanism I outlined.

I think this is a big crux here. It's one thing if it's a cult around a false belief. It's kind of a problem to sneer at a cult if its core singularity claim happens to be a true law of nature.

Or an analogy: I think GPT-4 is like the data from the Chicago Pile. That data was enough to convince the domain experts of the time that a nuke would work, to the point that they didn't test the Little Boy design before using it; you believe otherwise. Clearly machine generality is possible; clearly it can solve every problem you named, including, with the help of humans, ordering every part off Digi-Key, loading the pick-and-place, inspecting the boards, building the wire harnesses, and so on.

[–] [email protected] -2 points 1 year ago* (last edited 1 year ago) (6 children)

Just to be clear, you can build your own telescope now and see the incoming spacecraft.

Right now you can task GPT-4 with solving a problem roughly at the level of undergrad physics, let it use plugins, and it will generally get it done. It's real.

Maybe this is the end of the improvements, just as maybe the aliens will never actually enter orbit around Earth.
