Robin Hanson’s views on AI safety
| Topic | View |
|---|---|
| AI timelines | Long; has argued that human-level AI is likely still many decades to centuries away |
| Value of decision theory work | Appears skeptical; sees such foundational work as premature given his long timelines |
| Value of highly reliable agent design work | Skeptical; the case for such work rests largely on a local “foom” scenario he rejects |
| Difficulty of AI alignment | Sees the problem as less severe than commonly claimed; expects AI systems to be governed by the same sorts of institutions (law, competition, contracts) that control humans and firms today |
| Shape of takeoff/discontinuities in progress | Expects gradual, broadly distributed progress; prominent skeptic of a local, discontinuous “foom”, as argued in the AI-Foom debate with Eliezer Yudkowsky |
| Type of AI safety work most endorsed | Holds that most safety work should wait until AGI is much closer and the relevant designs are visible |
| How “prosaic” AI will be | |
| Kind of AGI we will have first (de novo, neuromorphic, WBE, etc.) | Has argued that whole brain emulation (WBE) may well arrive before de novo AGI, the scenario explored in *The Age of Em* |

