BrickedKeyboard

joined 1 year ago
[–] [email protected] -1 points 1 year ago (14 children)

Sure, but they were four-function calculators a few months ago. The rate of progress seems insane.

[–] [email protected] -2 points 1 year ago (6 children)

My experience in research indicates to me that figuring shit out is hard and time consuming, and "intelligence", whatever that is, has a lot less to do with it than having enough resources and luck. I'm not sure why some super smart digital mind would be able to do science much faster than humans.

That's right. Eliezer's LSD vision of the future where a smart enough AI just figures it all out with no new data is false.

However, you could...build a fuckton of robots and have those robots do experiments for you. You decide on the experiments, probably using a procedural formula: for example, you might try a million variations of wing design, or a million molecules that bind to a target protein, and so on. Humans already do this in those domains; this just extends it.
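To make "procedural formula" concrete, here's a minimal sketch of the wing-design version. Everything in it is hypothetical: run_experiment is a stand-in for a robot actually building and testing the candidate.

```python
import itertools
import random

def run_experiment(design):
    # Hypothetical stand-in: in reality a robot builds and tests the
    # candidate wing and returns a measured score (e.g. lift/drag).
    return random.random()

def candidate_wings():
    # A few hundred thousand (span, chord, sweep) combinations.
    spans = range(800, 2800, 20)    # mm
    chords = range(100, 500, 2)     # mm
    sweeps = range(0, 40, 2)        # degrees
    yield from itertools.product(spans, chords, sweeps)

best = max(candidate_wings(), key=run_experiment)
```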

[–] [email protected] -2 points 1 year ago (1 children)

It's 8 instances and the MoE architecture is a little more complex than that.

[–] [email protected] -3 points 1 year ago (17 children)

Just to engage with the high school bully analogy: the nerd has been threatening for years to show up with his sexbot bodyguards that are basically T-800s from Terminator, and you've been taking his lunch money and sneering. But now he's got real funding, he goes to work at a huge building, and apparently there are prototypes of the exact thing he claims to build inside.

The prototypes suck...for now...

[–] [email protected] -1 points 1 year ago (1 children)

I keep seeing this idea that all GPT needs to be true AI is more permanence and (this is wild to me) a robotic body with which to interact with the world. if that’s it, why not try it out? you’ve got a selection of vector databases that’d work for permanence, and a big variety of cheap robotics kits that speak g-code, which is such a simple language I’m very certain GPT can handle it. what happens when you try this experiment?

??? I don't believe GPT-n is ready for direct robotics control at a human level, because it was never trained on it and you need a modified transformer architecture; see https://www.deepmind.com/blog/rt-2-new-model-translates-vision-and-language-into-action . And a bunch of people have tried your experiment, with results collected here: https://github.com/GT-RIPL/Awesome-LLM-Robotics .
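For what it's worth, the mechanical half of that experiment really is simple. A hedged sketch, assuming a hobby controller that speaks g-code over serial; the port, baud rate, and whatever LLM generates the g-code are all assumptions, not a tested setup:

```python
import serial  # pyserial

def send_gcode(port, lines):
    """Stream g-code lines to the controller, waiting for 'ok' acks."""
    with serial.Serial(port, 115200, timeout=5) as conn:
        for line in lines:
            conn.write((line.strip() + "\n").encode())
            ack = conn.readline().decode().strip()
            if ack != "ok":
                raise RuntimeError(f"controller rejected {line!r}: {ack}")

# e.g. home, then a simple move -- the hard part is getting an LLM to
# emit g-code that does anything useful, which is the actual experiment.
send_gcode("/dev/ttyUSB0", ["G28", "G0 X10 Y10 F1500"])
```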

In addition, to tinker with LLMs you need to be GPU-rich, or have funding of about $250-500 million. My employer does, but I'm a cog in the machine. https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini

What I think is that the underlying technology that made GPT-4 possible can be made to drive robots at a human level on some tasks, though as I noted it may take until 2040 to get good. That technology is mostly just the idea of using lots of data, neural networks, and a mountain of GPUs.

Oh, and RSI (recursive self-improvement). That's the wildcard. This is where you automate AI research, including developing models that can drive a robot, using current AI as a seed. If that works, well. And yes, there are papers where it does work.

[–] [email protected] -3 points 1 year ago* (last edited 1 year ago) (3 children)

No, literally, the course material uses the word "belief". It means "the estimate of ground truth at this instant".

Those shaky blue lines that show where your Tesla on Autopilot thinks the lane is? That's its belief.

English and software have lots of overloaded terms.

[–] [email protected] -2 points 1 year ago* (last edited 1 year ago) (23 children)

1, 2: Since you claim you can't measure this even as a thought experiment, there's nothing to discuss.

3. I meant complex robotic systems able to mine minerals, truck the minerals to processing plants, maintain and operate the processing plants, and load the next set of trucks; the trucks go to part-assembly plants, where robots unload them, feed the materials into CNC machines, mill the parts, inspect the output, pack it, and load more trucks... culminating in robots assembling new robots.

It is totally fine if some human labor hours are still required; even partial automation cheapens the cost of robots by a lot.

4. This is deeply coupled to (3). If robots are cheap, and an AI system can control a robot well enough to do a task as well as a human, obviously it's cheaper to have robots do the task in most situations.

Regarding (3): the specific mechanism would be an AI that works like this:

Millions of hours of video of human workers doing tasks in the above domains + all video accessible to the AI company -> tokenized, compressed descriptions of the human actions -> an LLM-like model. The LLM-like model is thus predicting "what would a human do". You then need a model to map that "what" onto robotic hardware that is built differently than humans; this is called the "foundation model". You then use reinforcement learning, where actual or simulated robots let the AI system learn from millions of hours of practice, to improve on the foundation model (see the sketch below).

The long story short of all these tech-bro terms is robotic generality: the model will be able to control a robot to do every easy- or medium-difficulty task, the same way it can solve every easy or medium homework problem. This is what lets you automate (3), because you don't need to do a lot of engineering work for a robot to do a million different jobs.
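Here's the whole pipeline as pseudocode, since the terms stack up. Every function is hypothetical, a stand-in for a large training system rather than any real API:

```python
def tokenize_actions(video):
    """Compress video of a human worker into discrete action tokens."""
    ...

def pretrain_sequence_model(token_corpus):
    """Train the LLM-like model that predicts 'what would a human do'."""
    ...

def fit_action_mapping(human_predictor, robot_kinematics):
    """The 'foundation model': map predicted human actions onto
    robot hardware that is built differently than humans."""
    ...

def rl_finetune(policy, robots, hours):
    """Improve on the foundation model with real or simulated practice."""
    ...

def train_generalist_policy(worker_videos, web_videos, robots):
    tokens = [tokenize_actions(v) for v in worker_videos + web_videos]
    human_predictor = pretrain_sequence_model(tokens)
    policy = fit_action_mapping(human_predictor, robots.kinematics)
    return rl_finetune(policy, robots, hours=1_000_000)
```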

Multiple startups and DeepMind are working on this.

[–] [email protected] 5 points 1 year ago* (last edited 1 year ago) (1 children)

I'm trying to find the Twitter post where someone deepfakes Eliezer's voice into saying "full speed ahead on AI development, we need embodied catgirls pronto."

[–] [email protected] -1 points 1 year ago (4 children)

The one issue I have is: what if some of their beliefs turn out to be real? How would it change things if Scientologists got a two-way communication device (say they found it buried in Hubbard's backyard or whatever), it appears to be non-human technology, and they are able to talk to an entity who claims to be Xenu? That doesn't mean their cult religion is right, but say the entity is obviously non-human: it rattles off methods to build devices current science knows no way to build, other people build the devices and they work, and YOU can pay $480 a year and get FTL walkie-talkies or some shit sent to your door. How does that change your beliefs?

[–] [email protected] -2 points 1 year ago

Personally, I imagine him as the leader of a flying saucer cult where an alien vehicle is suddenly, actually arriving. He's running around panicking, tearing his hair out, because this wasn't what he planned; he just wanted money and bitches as a cult leader. It's one thing to say the aliens will beam every cult member up and take them to paradise, but if you see a multi-kilometer alien vehicle getting closer to Earth, whatever its intentions are, no one is going to be taken to paradise...

[–] [email protected] 0 points 1 year ago* (last edited 1 year ago) (3 children)

academic AI researchers have passed him by.

Just to be pedantic, it wasn't academic AI researchers. The current era of AI began here: https://www.npr.org/2012/06/26/155792609/a-massive-google-network-learns-to-identify

Since 2012, academic AI researchers haven't had the compute hardware to contribute to AI research, except for some who worked at corporate giants (mostly DeepMind) and went back into academia.

They are getting more hardware now, but the hardware required to stay relevant and to develop a capability that commercial models don't already have keeps increasing. Table stakes are now something like 10,000 H100s, which at roughly $25-50k per GPU is about $250-500 million in hardware.

https://www.semianalysis.com/p/google-gemini-eats-the-world-gemini

I am not sure MIRI tried any meaningful computational experiments. They came up with unrunnable algorithms that theoretically might work but would need nearly infinite compute.

[–] [email protected] -3 points 1 year ago (6 children)

Software you write can have a "belief" as well. The course I took had us write Kalman filters, where you start with some estimate of a quantity. That estimate is your "belief", and it comes with a variance.

Each measurement is a (value, variance) pair, where the variance is derived from the quality of the sensor that produced it.

It's an overloaded word, because humans are often unwilling to update their beliefs unless they are simple things, like "I believe the forks are in the drawer to the right of the sink". You believe that because you think you saw them there last. There is uncertainty: you might have misremembered, since your memory is unreliable and your eyes are unreliable. If it's your kitchen and you've had thousands of observations, your belief has low uncertainty; if it's a new place, your belief has high uncertainty.

If you go and look right now, and the forks are in fact there, you update your belief.
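In code, that update is a one-line variance-weighted average. A minimal 1-D sketch of a Kalman-style measurement update, following the (value, variance) convention above; the variable names and the fork numbers are mine:

```python
def update(belief, measurement):
    """Fuse a prior belief with a new measurement.

    Both arguments are (value, variance) pairs; lower variance = more trust.
    """
    (x, p), (z, r) = belief, measurement
    k = p / (p + r)               # gain: how much to trust the measurement
    return (x + k * (z - x),      # estimate shifts toward the measurement
            (1 - k) * p)          # fused variance always shrinks

# Prior: forks ~0.5 m right of the sink; new kitchen, so high uncertainty.
belief = (0.5, 0.2)
# You look: a reliable observation says 0.6 m.
belief = update(belief, (0.6, 0.05))
print(belief)  # estimate moves most of the way to 0.6; variance drops
```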
