this post was submitted on 23 Nov 2023
74 points (84.3% liked)

Technology
top 15 comments
[–] atx_aquarian 49 points 11 months ago* (last edited 11 months ago) (2 children)

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company.

Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

I'm really starting to agree that all this drama is clever marketing to sell a neat--but not bombshell--thing that would otherwise not be a real product.

[–] Clent 25 points 11 months ago (1 children)

Fear of missing out is driving this entire market segment.

No one is sure what they might miss out on, but that's not going to stop investors from being afraid.

Real estate's gone to shit, so they need to find some new place to move their money.

[–] [email protected] 3 points 11 months ago

It's hard to deal with at work right now. Every client is demanding AI as if it's some robot brain we can just plug in and it will do whatever they want.

[–] [email protected] 5 points 11 months ago

It’s crypto all over again, but with a less-useless technology underpinning it. Seriously, a computer doing grade school arithmetic is what will threaten humanity? I’m sure it’s interesting from a research perspective how that math is being done, but math is the easiest thing for a computer to do.

[–] ShittyBeatlesFCPres 13 points 11 months ago

The media really needs to stop doing the tech world’s advertising for them. They aren’t wrestling with ethics about making fucking sci-fi movie A.I. that turns on humanity. There are academic researchers who genuinely care about the real ethics of generative A.I., but the investors and leadership don’t.

Silicon Valley doesn’t even need sentient A.G.I. to create fresh dystopias. Microsoft already sells surveillance A.I. tech to I.C.E. They probably ran with GitHub’s “copilot” branding after creaming their pants imagining military contracts.

[–] MataVatnik 9 points 11 months ago (2 children)

Until these machines have consciousness and a vision, we will not see the end days.

[–] [email protected] 4 points 11 months ago

James Cameron: hold my water.

[–] [email protected] 3 points 11 months ago (1 children)

I'm baffled by people that are freaking out about this. We're doing a great job all on our own of bringing on the end days.

[–] MataVatnik 2 points 11 months ago

Exactly, we are already living in a dystopia.

[–] [email protected] 7 points 11 months ago

This is the best summary I could come up with:


Before his triumphant return late Tuesday, more than 700 employees had threatened to quit and join backer Microsoft (MSFT.O) in solidarity with their fired leader.

The sources cited the letter as one factor among a longer list of grievances by the board that led to Altman’s firing.

According to one of the sources, long-time executive Mira Murati told employees on Wednesday that a letter about the AI breakthrough called Q* (pronounced Q-Star), precipitated the board's actions.

The maker of ChatGPT had made progress on Q*, which some internally believe could be a breakthrough in the startup's search for superintelligence, also known as artificial general intelligence (AGI), one of the people told Reuters.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because they were not authorized to speak on behalf of the company.

Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.


The original article contains 293 words, the summary contains 169 words. Saved 42%. I'm a bot and I'm open source!

[–] [email protected] 4 points 11 months ago (1 children)

They fired him because they made an AI that could do some basic maths. Am I reading that right?

[–] dustyData 3 points 11 months ago (1 children)

I get the hype because it is a big deal in machine learning. However, it's a gross overreaction. These people are high on their own farts.

[–] [email protected] 0 points 11 months ago (2 children)

Oh yeah, it's undoubtedly an advancement, but as you said, firing someone over that is an overreaction, and that's putting it generously.

I understand taking some form of action if there were evidence of something potentially catastrophic, or at least one step removed from that. Do they have public guidelines on where they draw the line? What's the procedure that says "at this point we pull the plug"? I highly doubt it says "when it can do basic math".

[–] dustyData 5 points 11 months ago* (last edited 11 months ago)

Do they have public guidelines on where they draw the line?

This is part of the problem. A non-profit supervising a for-profit model clearly doesn't work. Altman, either intentionally or opportunistically, took this chance to essentially coup his own board. The other side of the board's supposed reasons to oust him was that there are no guidelines, there are no safeguards, they're all flying by the seat of their pants and being reckless and destructive all around. Many AI companies are already starting to see lawsuits over damages they failed to foresee.

[–] tinkeringidiot 1 points 11 months ago

An overreaction by members of the board that wanted to keep AI development slow and “safe”. Sudden news that there was a major advancement toward AGI (which they believe will destroy humanity; there’s seriously a whole cult around this in AI research circles right now) that they hadn’t been told about sent them off the deep end. Those board members thought they could fire Altman and throw the brakes on, not anticipating that 700 employees would side against them and potentially migrate to Microsoft, where the “AI ethics” crowd would have no influence at all.

They shot their shot and lost massively, for themselves and their fellow believers. That attitude toward AI is now being labeled a business liability in the minds of every decision maker in the whole AI world.