Eliezer Yudkowsky’s views on AI safety
Topic | View |
---|---|
AI timelines | See [here](http://lesswrong.com/lw/gfb/update_on_kim_suozzi_cancer_patient_in_want_of/8bv0) for 2083 as Carl Shulman’s median estimate. In Intelligence Explosion Microeconomics (IEM), Eliezer says “I’m currently trying to sort out with Carl Shulman why my median is forty-five years in advance of his median” (p. 83), which puts Eliezer’s median as of 2013 at around 2038. See also this thread, this tweet, and this comment. |
Value of decision theory work | |
Value of highly reliable agent design work | |
Difficulty of AI alignment | See this tweet. |
Shape of takeoff/discontinuities in progress | |
Type of AI safety work most endorsed | |
How “prosaic” AI will be | |
Kind of AGI we will have first (de novo, neuromorphic, WBE, etc.) | |
Difficulty of philosophy | Here is one remark. |