This post was submitted on 21 Jun 2024
50 points (81.2% liked)


A month after he left OpenAI amid disagreements regarding the safety of the company's products, Dr. Ilya Sutskever announced a new venture called Safe Superintelligence (SSI). “Building safe superintelligence (SSI) is the most important technical problem of our time,” read the new company's announcement, also signed by fellow co-founders Daniel Gross and Daniel Levy. “We have started the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence. It’s called Safe Superintelligence. SSI is our mission, our name, and our entire product roadmap, because it is our sole focus. Our team, investors, and business model are all aligned to achieve SSI.”

The founders of SSI have deep ties to Israel. Sutskever (37) was born in the USSR before immigrating to Jerusalem at the age of 5. He began his academic studies at the Open University but completed all his degrees at the University of Toronto, where he earned a doctorate in machine learning under the guidance of Prof. Geoffrey Hinton, one of the early pioneers in the field of artificial intelligence (AI).

top 7 comments
[–] NevermindNoMind 23 points 6 days ago (2 children)

While I appreciate the focus and mission, kind of I guess, you're really going to set up shop in a country literally using AI to identify airstrike targets and handing over to the AI the decision-making over whether the anticipated civilian casualties are proportionate? https://www.theguardian.com/world/2024/apr/03/israel-gaza-ai-database-hamas-airstrikes

And Israel is pretty authoritarian, given its recent actions against its own supreme court and its banning of journalists (Al Jazeera was outlawed, the Associated Press had cameras confiscated for sharing images with Al Jazeera, oh and the offices of both have been targeted in Gaza). Do you really think the right-wing Israeli government isn't going to co-opt your "safe super AI" for their own purposes?

Oh, and then there is the whole genocide thing. Your claims about concern for the safety of humanity ring more than a little hollow when you set up shop in a country actively committing genocide, or at the very least engaged in war crimes and crimes against humanity, as determined by like every NGO and international body that exists.

So my takeaway is that Ilya is a shithead.

[–] [email protected] 7 points 6 days ago

safe /for us/ super AI

[–] slurpinderpin -2 points 6 days ago

Al Jazeera lmao.

[–] [email protected] 21 points 1 week ago

A grifter for a grifted land.

[–] db2 13 points 1 week ago
[–] [email protected] 10 points 1 week ago (1 children)

Hairline gore. Just take it all off bro.

[–] Linkerbaan 12 points 1 week ago* (last edited 1 week ago)

Man's in need of that Apartheid transplant