Level of quantification in cause prioritization
Katja Grace in “Shallow Overview” lists the level of quantification as one of the “project parameters” for cause prioritization research (page 9):
Level of quantification: quantification brings rigor, transparency, and the ability to talk about judgements. On the other hand, it often results in leaving out vague considerations, and considerations where one does not have a good conscious model, and can give a false appearance of accuracy. Systematic cause prioritization sets itself apart from more traditional methods of selecting causes in part through a certain level of quantification. However among those practicing prioritization, there is some disagreement about how far one should go.
In “Passive vs. rational vs. quantified”, Holden Karnofsky gives some insight into how GiveWell assesses charities by comparing the process to that of buying a printer. In particular, the post argues against a “quantified” approach, favoring a “rational” approach:
The weakness of this approach, in my view, is that it takes an enormous amount of effort to do well, and even when done well generally involves so much guesswork and uncertainty that it’s questionable whether the results should influence one’s prior beliefs. Valid, high-certainty information that should shift one’s view (for example, “this printer takes up a lot of space”) can be lost in the noise of all the guesswork used to convert the information into a unified framework (for example, converting the space taken up into dollars gained).
When a single unified equation is used, one mistake – or one omitted parameter – can produce a completely wrong conclusion, even if the rest of the analysis is sound. The “rational” approach combines and adjusts multiple models implicitly, so it is more likely to give a good answer even when some inputs are unreliable. It can also be more efficient, in the sense of view-shifting information gained per person-hour spent.
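To make this fragility concrete, here is a toy sketch in Python (not drawn from Karnofsky’s post; the model and every number in it are invented for illustration). A single unified cost-effectiveness equation ranks two options; one mis-estimated parameter reverses the ranking even though all of the other inputs are accurate:

```python
def cost_effectiveness(benefit_per_unit, units_reached, overhead_rate, cost):
    """Toy unified equation: expected benefit per dollar spent."""
    total_benefit = benefit_per_unit * units_reached * (1 - overhead_rate)
    return total_benefit / cost

# Two hypothetical interventions; all numbers below are invented.
a = dict(benefit_per_unit=1.0, units_reached=10_000, overhead_rate=0.1, cost=50_000)
b = dict(benefit_per_unit=0.8, units_reached=12_000, overhead_rate=0.1, cost=50_000)

print(cost_effectiveness(**a))  # 0.18    -> A looks better
print(cost_effectiveness(**b))  # 0.1728

# A single mis-estimated parameter (A's overhead is really 0.5, not 0.1)
# reverses the conclusion, even though every other input was sound.
a_corrected = dict(a, overhead_rate=0.5)
print(cost_effectiveness(**a_corrected))  # 0.10  -> B is actually better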
A similar idea can be found in “Estimates vs. head to head comparisons” (Internet Archive). In particular:
This often means that it is better to try and make comparisons of the form “Is X better than Y?” than to try and independently estimate the value of X and Y. When making a comparison between X and Y, we can minimize uncertainty by making the analyses as similar to each other as possible.
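The statistical intuition behind this is that independent estimates of X and Y each carry their own noise, whereas a head-to-head comparison can share assumptions, so the shared errors cancel in the difference. A minimal Monte Carlo sketch in Python (all numbers invented, not from the post):

```python
import random
import statistics

random.seed(0)
N = 100_000

diffs_independent, diffs_head_to_head = [], []
for _ in range(N):
    # True values: X = 10, Y = 8, so the true difference is 2.
    # Independent estimates: each is distorted by its own noisy
    # "methodology" factor.
    method_x = random.gauss(1.0, 0.3)
    method_y = random.gauss(1.0, 0.3)
    diffs_independent.append(10 * method_x - 8 * method_y)

    # Head-to-head: both estimates share a single methodology, so
    # its error largely cancels in the difference.
    shared = random.gauss(1.0, 0.3)
    diffs_head_to_head.append(10 * shared - 8 * shared)

print(statistics.pstdev(diffs_independent))   # about 3.8
print(statistics.pstdev(diffs_head_to_head))  # about 0.6
```

Both procedures are unbiased in this toy setup (the mean of each list is near the true difference of 2), but the shared-methodology comparison has far lower variance. The analogous move in prioritization is to hold the analytical framework fixed across the options being compared.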
External links
- Level of measurement
- Carl Shulman’s comment on LessWrong:

  I am a booster of more such estimates, detailed enough to make assumptions and reasoning explicit. Quantifying one’s assumptions lets others challenge the pieces individually and make progress, where with a wishy-washy “list of considerations pro and con” there is a lot of wiggle room about their strengths. Sometimes doing this forces one to think through an argument more deeply only to discover big holes, or that the key pieces also come up in the context of other problems.

- “A Complete Quantitative Model for Cause Selection” by Michael Dickens.