this post was submitted on 23 Nov 2023
50 points (89.1% liked)

Technology


This is the official technology community of Lemmy.ml for all news related to creation and use of technology, and to facilitate civil, meaningful discussion around it.



Ahead of OpenAI CEO Sam Altman’s four days in exile, several staff researchers wrote a letter to the board of directors warning of a powerful artificial intelligence discovery that they said could threaten humanity, two people familiar with the matter told Reuters.

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend's events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup's search for what's known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

Reuters could not independently verify the capabilities of Q* claimed by the researchers.

top 16 comments
[–] [email protected] 23 points 11 months ago (6 children)

DeepMind is actually delivering shit like an estimate of the entire human proteome structure and creating the transcendently greatest Go player of all time.

Meanwhile these chucklefucks are using the same electricity demand as Belgium to replicate a math solver that could probably be assigned as a half-term project in an undergraduate class, and are pissing themselves about threatening humanity.

The Valley has lost its goddamn mind.

[–] Redex68 5 points 11 months ago

You are missing the very crucial part about how this is generalised. That's like saying we don't need to teach math to people anymore, we have calculators now. The AI isn't too capable currently, but dismissing it would be like dismissing consumer PCs, because what are people gonna do with computers?

[–] [email protected] 3 points 11 months ago

Valley bullshit aside, I do have to defend the expensive exploration of the generalized AI space purely because it's embarrassingly parallel. That is, it just gets so much better the more money and resources you throw at it. It couldn't solve math without a few million dollars' worth of supercomputer training time. We didn't know it would create valid VHDL-to-csv-to-VBA scripts, but I got phind(.com) to make me one. And I certainly can't tell Wolfram Alpha to package the math solution it generated as a JavaScript function.
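As a toy illustration of what "embarrassingly parallel" means (a hypothetical workload sketch, not anything from OpenAI's actual training setup):

```python
from multiprocessing import Pool

def evaluate(seed: int) -> float:
    """Stand-in for one independent unit of work, e.g. scoring
    one hyperparameter setting or one training shard."""
    # Deterministic pseudo-score derived from the seed.
    x = (seed * 2654435761) % 2**32
    return (x / 2**32) ** 2

if __name__ == "__main__":
    seeds = range(1000)
    # Each task is independent of every other, so adding workers
    # scales throughput almost linearly -- the defining property
    # of an embarrassingly parallel workload.
    with Pool(processes=4) as pool:
        scores = pool.map(evaluate, seeds)
    print(f"best score: {max(scores):.4f}")
```

The point of the sketch is only the shape: no task communicates with any other, so "throw more money and resources at it" translates directly into more worker processes.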

[–] [email protected] 2 points 11 months ago

DeepMind is actually delivering shit like an estimate of the entire human proteome structure and creating the transcendently greatest Go player of all time.

Not to mention the huge advances in Chess AI. LeelaChessZero is the open-source implementation of the original AlphaZero idea Google came out with, and is rivaling Stockfish 15. Meanwhile, Torch is a new AI being developed that is now kicking Stockfish's ass.

Grandmasters and novices alike are learning a lot from chess AI, figuring out better ways to improve themselves, whether by playing the engines outright, using them for post-game analysis, or watching two bots play and seeing the kinds of creative strategies they come up with.

[–] technojamin 1 points 11 months ago (1 children)

While I agree that a lot of the hype around AI goes overboard, you should probably read this recent paper about AI classification: https://arxiv.org/abs/2311.02462

Systems like DeepMind are narrow AI, whereas LLMs are general AI.

[–] [email protected] 1 points 11 months ago

Not really. The implementation is mostly the same; they just run continuously on a per-word (token) basis.

[–] [email protected] 1 points 11 months ago

Nicely put.

[–] [email protected] 11 points 11 months ago

What are they smoking at OpenAI? Can I get some?

[–] [email protected] 4 points 11 months ago (1 children)

Doesn't WolframAlpha already do this?

[–] NounsAndWords 12 points 11 months ago (2 children)

A calculator does most of it too, but this is an LLM that can do lots of other things as well, which is a big piece of the "general" part of AGI.

Richard Feynman said: “You have to keep a dozen of your favorite problems constantly present in your mind, although by and large they will lay in a dormant state. Every time you hear or read a new trick or a new result, test it against each of your twelve problems to see whether it helps. Every once in a while there will be a hit, and people will say, ‘How did he do it? He must be a genius!’”

We are close to a point where a computer that can hold all the problems in its "head" can test all of them against all of the tricks. I don't know what math problems that starts to solve but I bet a few of them would be applicable to cryptology.
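Feynman's heuristic is essentially a nested loop: keep a fixed set of standing problems and, whenever a new trick arrives, try it against every one of them. A minimal sketch of that sweep (all problem and trick names here are made up for illustration):

```python
# Feynman's "dozen problems" heuristic as a brute-force sweep:
# every new trick is tested against every standing problem.
problems = ["factor large integers", "invert this hash", "solve Diophantine eq."]
tricks = {
    # trick name -> the problems this (hypothetical) trick happens to crack
    "lattice reduction": {"solve Diophantine eq."},
    "quantum period finding": {"factor large integers"},
}

def find_hits(problems, tricks):
    """Return (trick, problem) pairs where a trick cracks a problem."""
    return [(trick, p)
            for trick, solved in tricks.items()
            for p in problems if p in solved]

for trick, problem in find_hits(problems, tricks):
    print(f"{trick} -> {problem}")
```

A machine that "holds all the problems in its head" is just running this loop at a scale no human can, over every trick it has ever seen.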

But then again, I have no idea what I'm talking about and just making bold guesses based on close to no information.

[–] [email protected] 1 points 11 months ago (1 children)

Even so, I think I'll hold off on calling anything AGI until it can at least solve simple calculus problems with a 90% success rate (reproducibly). I think that's a fair criterion.

[–] NounsAndWords 1 points 11 months ago

I'd say more than that. I don't think anyone is that close to AGI...yet

[–] [email protected] 0 points 11 months ago

And he said this in the 80s, when AI as we know it today was barely a concept.

[–] [email protected] 1 points 11 months ago

I think it's time to shut it down, hard. That's the start of something that will not end well for human beings.

[–] [email protected] 1 points 11 months ago

This is the best summary I could come up with:


Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing.

According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board's actions.

The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company.

Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.


The original article contains 293 words, the summary contains 169 words. Saved 42%. I'm a bot and I'm open source!