AI timelines and cause prioritization

In a 2013 comment, Eliezer Yudkowsky said:

Median doom time toward the end of the century? That seems enormously optimistic. If I believed this I’d breathe a huge sigh of relief, upgrade my cryonics coverage, spend almost all current time and funding trying to launch CFAR, and write a whole lot more about the importance of avoiding biocatastrophes and moderating global warming and so on. I might still work on FAI due to comparative advantage, but I’d be writing mostly with an eye to my successors. But it just doesn’t seem like ninety more years out is a reasonable median estimate. I’d expect bloody uploads before 2100.

From a 2014 conversation between Luke Muehlhauser and Jacob Steinhardt:

Jacob: But I think the academic’s day-to-day research doesn’t depend much on AI timelines. Whether AI is 30 or 100 years away, there are problems in front of you, and you just want to push on those.

Luke: What? Your AI timelines estimate is totally relevant to what should be worked on now.

Jacob: I don’t think my actions depend much on AI timelines, and the reason is that I can point to the next fundamental issues, so let’s just work on those, whether AI is 20 years away or 50 years away or 100 years away. If it were 10 years away, maybe, but I think that’s very unlikely.

Luke: If somebody has most of their probability mass on AI being more than 150 years out, then this drastically reduces the likely relevance of anything you try to do about AI safety now, and maybe they shouldn’t be focused on AI safety but on biosecurity instead.

Jacob: Okay, I think some sort of estimate – though maybe not exactly a probability distribution over years to AI – should impact what field you go into. But I don’t think it should have much bearing on the particular problems you work on in that field.