AI Safety

84 readers
1 user here now

A place to discuss AI Safety, both reducing short-term harm and preventing misaligned AGI.

We need more users to share content here so that the AI Safety community can grow! If you have anything to share about AI alignment, please drop a post :) We appreciate you!

founded 2 years ago
submitted 1 year ago by luka to c/aisafety

This paper provides an overview of the main sources of catastrophic AI risk, organized into four categories: Malicious Use, AI Race, Organizational Risks, and Rogue AIs. (The PDF can be downloaded from the linked arxiv.org page.)

submitted 2 years ago* (last edited 2 years ago) by luka to c/aisafety

Debating the proposition "AI research and development poses an existential threat"! Witness incredible feats of mental gymnastics and denialism! Gaze in dumbstruck awe as Yann LeCun suggests there is no need to worry because if and when AI starts to look dangerous we simply won't build it! Feel your jaw hit the floor as Melanie Mitchell argues that of course ASI is not an X-risk, because if such a thing could exist, it would certainly be smart enough to know not to do something we don't want it to do! A splendid time is guaranteed for all.

A pretty solid interview with Connor Leahy on why AI Safety/Alignment is important, how transhumanists are caught in an ideological race, and some possible societal solutions and regulations.

I think this might be Leahy's best yet.

submitted 2 years ago by luka to c/aisafety

This video by Robert Miles makes a level-headed argument for taking existential AI Safety seriously, as soon as possible.