Goals of the wiki

Some thoughts:

  • Focus on shallow investigations, i.e. “breadth first” rather than “depth first”?
  • Final form a wiki? What about a blog? Or something else entirely?
  • How to look at causes?
    • Importance, tractability, and neglectedness? Or, more generally, General questions for causes?
    • A single number for each cause, i.e. a ranking? (See the sketch after this list.)
    • Separate rankings for each “cluster” of causes (e.g. existential risks, global health [which is what GiveWell has done], biomedical research, and so on)
    • Have a separate ranking for each “type of action/intervention” (e.g. cause A is best if you want to give money, cause B is best if you want to volunteer, cause C is best if you want to do academic research, etc.)?
    • Cluster thinking/sequence thinking applied to causes?
    • Look at Owen Cotton-Barratt’s approach to modeling.
    • PESTEL framework, or a PESTEL × INT cross, which lets one ask things like “do causes under politics tend to be tractable?”, etc.
    • One can also split causes into those that are controversial and those that aren’t (or treat this as a gradient). If a cause is controversial, it’s important to first establish that it’s actually worth working on (putting effort into something that’s likely to cause harm is wasteful), so pages for those causes might take a “pros and cons” format. If a cause is already considered good (and no one seriously argues it’s bad), then “pros and cons” doesn’t make sense; we just want to know how big a deal the cause is.
    • One of the big things to think about is that the importance of certain causes can fluctuate wildly depending on the assumptions one makes.[1] These assumptions sometimes come down to “personal taste” or intuition, which is precisely the kind of reliance cause prioritization is trying to eliminate by conducting rigorous analyses of causes. Since it is often very hard to figure out which assumptions are right, we might only be able to say something after aggregating across many different moral philosophies.[2] One approach to CP, then, is to consider the importance of each cause under each set of assumptions and produce a ranking for each scenario, somewhat like how the IPCC considers different scenarios (see the sketch after this list).
  • What to optimize for? Examples:
    • The cause with the best potential for online advocacy, à la Open Borders? This was sort of the implicit original plan, in that I wanted to figure out which cause I should make a website for.
    • Finding what the best causes are, period: treating CP as an end-goal instead of as a means to figure out what to do next.
  • How best to communicate comparisons? Examples:
    • A table on wikis, e.g. the Cognito Mentoring wiki likes to use tables for comparisons (example). See also Groupprops, which has pages like this one. There are also tools like WikiMatrix (e.g. here) that let you compare two things along different parameters, though causes may be too dissimilar for this to work nicely. I’ve made a small prototype at Issa Rice/Priors for CP that might be useful at some point.
    • A series of blog posts might be another answer.
    • A web of interconnected wiki pages, which seems to be the current default trajectory.
    • I also now have a test MediaWiki instance up and running that might become the future of the CP Wiki; for example, it has Semantic MediaWiki support for storing structured data on causes (though I’m not sure we want to go there).
  • Creating some sort of community around CP itself…
    • Almost like an “organic” version of the Open Philanthropy Project (GW Labs), where anyone can Be Bold and add to something, instead of all the research coming from one place (how to ensure quality?)
    • As little jargon as possible, with links to terms like “EA” (if they have to be used) and “existential risks”
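
  A rough sketch of two of the ideas above: collapsing importance, tractability, and neglectedness into a single score per cause, and then checking how the resulting ranking shifts under different background assumptions. Every cause, rating, and assumption set below is an invented placeholder, and summing log-scale ratings is only one possible aggregation rule, not something the wiki has settled on.

    # A toy sketch of (a) collapsing importance, tractability, and
    # neglectedness into a single score per cause by summing log-scale
    # ratings, and (b) checking how the resulting ranking shifts under
    # different background assumptions. All numbers are invented.

    # Hypothetical 0-10 log-scale ratings: (importance, tractability, neglectedness)
    base_ratings = {
        "global health":    (8, 7, 2),
        "animal welfare":   (6, 5, 7),
        "existential risk": (9, 3, 8),
    }

    # Hypothetical assumption sets, each expressed as adjustments to the
    # importance rating of particular causes.
    assumption_sets = {
        "baseline":           {},
        "high animal weight": {"animal welfare": +3},
        "person-affecting":   {"existential risk": -4},
    }

    def score(ratings, importance_adjustment=0):
        """Composite score: sum of the three log-scale ratings, with the
        importance rating adjusted for the current assumption set."""
        importance, tractability, neglectedness = ratings
        return (importance + importance_adjustment) + tractability + neglectedness

    for name, adjustments in assumption_sets.items():
        ranking = sorted(
            base_ratings,
            key=lambda cause: score(base_ratings[cause], adjustments.get(cause, 0)),
            reverse=True,
        )
        print(f"{name:>18}: " + " > ".join(ranking))

  The point is just that the ordering need not be stable across assumption sets, which is one argument for publishing a ranking per scenario rather than a single master ranking.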

  1. For instance, the importance of reducing animal suffering depends heavily on one’s position on the “moral weight” of animals. As Scott Alexander puts it in “A Series Of Unprincipled Exceptions”:

    Most people intuitively believe that animals have non-zero moral value; it’s worse to torture a dog than to not do that. Most people also believe their moral value is some function of the animal’s complexity and intelligence which leaves them less morally important than humans but not infinitely less morally important than humans. Most people then conclude that probably the welfare of animals is moderately important in the same way the welfare of various other demographic groups like elderly people or Norwegians is moderately important – one more thing to plug into the moral calculus.

    In reality it’s pretty hard to come up with a way of valuing animals that makes this work. If it takes a thousand chickens to have the moral weight of one human, the importance of chicken suffering alone is probably within an order of magnitude of all human suffering. You would need to set your weights remarkably precisely for the values of global animal suffering and global human suffering to even be in the same ballpark. Barring that amazing coincidence, either you shouldn’t care about animals at all or they should totally swamp every other concern. Most of what would seem like otherwise reasonable premises suggest the “totally swamp every other concern” branch.
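
     To make the quoted argument a bit more explicit (my own schematic notation; none of the symbols stand for actual estimates), write

         \[ \frac{S_{\text{animal}}}{S_{\text{human}}} \;=\; \frac{w \, N_a \, s_a}{N_h \, s_h} \]

     where w is the moral weight of one animal relative to one human, N_a and N_h are the animal and human populations, and s_a and s_h are the average suffering per individual. The ratio is linear in w, so the two totals land within a factor of ten of each other only when w falls inside a window spanning about two orders of magnitude, while candidate values of w could plausibly range over far more orders of magnitude than that. Outside the window, one side swamps the other, which is the fork Scott describes.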

  2. This goes by different names. For instance, Nick Beckstead discusses it in his PhD thesis in terms of “curve fitting” over moral philosophies, and Will MacAskill’s thesis argues for maximizing expected “choice-worthiness” across moral theories. See also “Moral uncertainty – towards a solution?” by Nick Bostrom (in collaboration with Toby Ord), which proposes a parliamentary model for dealing with moral uncertainty.
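
     One compact way to state the aggregation idea (my own notation, loosely following the “expected choice-worthiness” framing; a sketch, not the exact formalism from either thesis): given credences C(T_i) in moral theories T_i, rank options by

         \[ \mathrm{EC}(A) \;=\; \sum_i C(T_i) \, \mathrm{CW}_i(A) \]

     where CW_i(A) is the choice-worthiness of option (or cause) A under theory T_i. The standard difficulty is that choice-worthiness may not be comparable across theories, which is part of what motivates the parliamentary model as an alternative.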