Carl Shulman’s views on AI safety

| Topic | View |
|---|---|
| AI timelines | See this comment |
| Value of highly reliable agent design (e.g. decision theory, logical uncertainty) work | |
| Value of intelligence amplification work | See comments like this one |
| Value of pushing for whole brain emulation | See this report. He gives some points against it in a comment starting with “However, the conclusion that accelerating WBE (presumably via scanning or neuroscience, not speeding up Moore’s Law type trends in hardware) is the best marginal project for existential risk reduction is much less clear.”[1] See also this comment and this one. “The type of AI technology: whole brain emulation looks like it could be relatively less difficult to control initially by solving social coordination problems, without developing new technology, while de novo AGI architectures may vary hugely in the difficulty of specifying decision algorithms with needed precision”.[2] |
| Difficulty of AI alignment | |
| Shape of takeoff/discontinuities in progress | |
| Type of AI safety work most endorsed | |
| How “prosaic” AI will be | |
| How well we need to understand philosophy before building AGI | Some discussion in this thread and this comment. |
| Kind of AGI we will have first (de novo, neuromorphic, WBE, etc.) | See this comment, this comment, and this comment. This comment is also on this topic, though it does not necessarily reflect Carl’s views. |