List of discussions between Eliezer Yudkowsky and Paul Christiano

This is a list of public discussions between Eliezer Yudkowsky and Paul Christiano, held on LessWrong, the Intelligent Agent Foundations Forum, Arbital, Medium, and Facebook.

Start date | End date | Venue | Thread title | Topics covered | Summary
2010-12-18 | | LessWrong | “Cryptographic Boxes for Unfriendly AI” | |
2010-12-20 | | LessWrong | “What can you do with an Unfriendly AI?” | |
2010-12-22 | | LessWrong | “Motivating Optimization Processes” | |
2012-02-26 | | LessWrong | “The mathematics of reduced impact: help needed” | |
2013-06-12 | | LessWrong | “Do Earths with slower economic growth have a better chance at FAI?” | |
2013-06-13 | | LessWrong | “After critical event W happens, they still won’t believe you” | |
2014-11-18 | 2014-11-20 | Intelligent Agent Foundations Forum | “I’ll very quickly remark that I think that the competence gap is indeed the main issue …” | |
2015-06-17 | 2015-12-29 | Arbital | “Mindcrime” | |
2015-06-18 | 2015-06-18 | Arbital | “Diamond maximizer” | |
2015-06-18 | 2015-06-18 | Arbital | “Identifying ambiguous inductions” | |
2015-06-18 | 2015-06-18 | Arbital | “Patch resistance” | |
2015-06-18 | 2015-06-18 | Arbital | “Relevant limited AI” | |
2015-06-18 | 2015-06-18 | Arbital | “Zermelo-Fraenkel provability oracle” | |
2015-06-18 | 2015-07-14 | Arbital | “Complexity of value” | |
2015-06-18 | 2015-07-14 | Arbital | “Omnipotence test for AI safety” | |
2015-06-18 | 2015-12-27 | Arbital | “Ontology identification problem” | |
2015-06-18 | 2016-03-22 | Arbital | “Nearest unblocked strategy” | |
2015-06-19 | 2015-12-29 | Arbital | “Distant superintelligences can coerce the most probable environment of your AI” | |
2015-11-10 | 2017-11-11 | Facebook | “Want to avoid going down an awful lot of blind alleys in AI safety? Here’s a general heuristic …” | |
2015-12-03 | 2015-12-06 | Medium | “On heterogeneous objectives” | |
2015-12-27 | 2015-12-27 | Arbital | “Behaviorist genie” | |
2015-12-27 | 2015-12-29 | Arbital | “Orthogonality Thesis” | |
2015-12-27 | 2016-04-21 | Arbital | “AI safety mindset” | |
2015-12-29 | 2015-12-29 | Arbital | “Autonomous AGI” | |
2015-12-29 | 2015-12-29 | Arbital | “Modeling distant superintelligences” | |
2015-12-29 | 2016-01-03 | Arbital | “Known-algorithm non-self-improving agent” | |
2015-12-29 | | Arbital | “Task-directed AGI” | |
2016-01-01 | 2016-01-01 | Arbital | “Advanced agent properties” | |
2016-01-30 | 2016-01-30 | Arbital | “Natural language understanding of ‘right’ will yield normativity” | |
2016-02-25 | 2016-02-29 | Arbital | “Epistemic and instrumental efficiency” | |
2016-03-09 | | Arbital | “Reflectively consistent degree of freedom” | |
2016-03-11 | 2016-03-13 | Facebook | “(Long.) As I post this, AlphaGo seems almost sure to win the third game and the match …” | |
2016-03-16 | | Arbital | “Open subproblems in aligning a Task-based AGI” | |
2016-03-19 | 2016-03-19 | Arbital | “Low impact” | |
2016-03-26 | 2016-03-26 | Arbital | “Informed oversight” | |
2016-03-29 | 2016-03-29 | Facebook | “Paul Christiano, someone wrote a story about approval-directed agents! …” | |
2016-04-15 | 2016-04-17 | Arbital | “Faithful simulation” | |
2016-04-15 | 2016-04-21 | Arbital | “Goal-concept identification” | |
2016-04-29 | 2016-06-06 | Arbital | “Coherent extrapolated volition (alignment target)” | |
2016-05-17 | 2016-05-18 | Arbital | “Show me what you’ve broken” | |
2016-10-21 | 2016-10-21 | Facebook | “What people discuss at AI ethics conferences: How we can possibly convey all the deep subtleties of human morality …” | |
2017-01-17 | 2017-01-17 | Facebook | “I am concerned about the number of people I’ve heard joking about Trump’s election being evidence for the Simulation Hypothesis …” | |
2017-10-19 | 2017-10-19 | Facebook | “AlphaGo Zero uses 4 TPUs, is built entirely out of neural nets with no handcrafted features …” | |
2017-12-09 | 2017-12-11 | Facebook | “Max Tegmark put it well, on Twitter: The big deal about Alpha Zero isn’t …” | |
