Eliezer Yudkowsky on the Great Stagnation

In scattered places Eliezer Yudkowsky has written about the Great Stagnation.

Issa’s current understanding is that Eliezer mainly attributes the Great Stagnation to social/systemic factors like occupational licensing and similar bureaucracy, principal–agent problems, and “bad” Nash equilibria, rather than e.g. the picking of low-hanging fruit or the factors J. Storrs Hall discusses in his book Where Is My Flying Car? (Or at least, that’s what he tends to emphasize; he still cites Cowen.) The distinctive feature of Eliezer’s analysis is that he thinks the Great Stagnation might be good because it’s the only thing keeping us safe from unfriendly AI.

One question might be how Eliezer explains the timing of the stagnation, given that many kinds of bureaucracy and principal–agent problems have existed for far longer than the stagnation itself. Could the important kinds of bureaucracy be relatively recent? (potential examples: 1, 2)

Quotes

“Intelligence Explosion Microeconomics”:

I am in fact such a true cynic and I suspect that social factors dilute average contributions around as fast as new researchers can be added. A less cynical hypothesis would be that earlier science is easier, and later science grows more difficult at roughly the same rate that scientific output scales with more researchers being added.

Facebook post from 2014-07-21:

When I think of the Great Stagnation I think of the FDA destroying drug development to the point where we have exploding obesity; the degradation of many sciences to the point where dieticians can’t solve obesity; the fact that math papers from 1960 seem far more readable and friendly despite, or maybe because of, not having access to LaTeX; I think of university systems dying amid exploding student debt; I think of declining real median income in an environment of rising rents and healthcare costs and the aforesaid student debt; I think of barriers to entry and everyone suing everyone else and all the other forces that have driven innovation into bits because innovation in atoms is somehow a lot less profitable; I think of coal plants in the places where liquid fluoride thorium reactors should be; I think of slums that weren’t still supposed to be there, and maybe wouldn’t be there if things had improved for the bottom 20% in Western countries at the same rate they did between 1930 and 1970.

“Do Earths with slower economic growth have a better chance at FAI?” discusses why he thinks the Great Stagnation might be a good thing (because economic growth speeds up the development of unfriendly AI more than friendly AI).

“Living in an Inadequate World”:

I once encountered a case of (honest) misunderstanding from someone who thought that when I cited something as an example of civilizational inadequacy (or as I put it at the time, “People are crazy and the world is mad”), the thing I was trying to argue was that the Great Stagnation was just due to unimpressive / unqualified / low-status (“stupid”) scientists. He thought I thought that all we needed to do was take people in our social circle and have them go into biotech, or put scientists through a CFAR unit, and we’d see huge breakthroughs.

From the Yudkowsky–Hanson Jane Street Debate (2011):

We may have six billion people on this planet, but they don’t really add that way. Six billion humans are not six billion times as smart as one human. I can’t even imagine what that planet would look like. It’s been known for a long time that buying twice as many researchers does not get you twice as much science. It gets you twice as many science papers. It does not get you twice as much scientific progress.

Here we have some other people in the Singularity Institute who have developed theses that I wouldn’t know how to defend myself which are more extreme than mine, to the effect that if you buy twice as much science you get flat output or even it actually goes down because you increase the noise-to-signal ratio.

External links