Nick Bostrom in “Existential Risks”:

Other technologies that have a wide range of risk-reducing potential include intelligence augmentation, information technology, and surveillance. These can make us smarter individually and collectively, and can make it more feasible to enforce necessary regulation. A strong prima facie case therefore exists for pursuing these technologies as vigorously as possible.

Forms of surveillance include governmental surveillance, corporate surveillance, and verification of international agreements.

Dual-use technology

Surveillance could be used in a negative way by authoritarian political institutions, but could also be used in a positive way to prevent the use of dangerous technologies.


Data collection continues to increase (Gasser et al., 2016).

The ability to use this data also continues to increase.

  • Partly a matter of analysis (e.g. better data mining)
  • Partly a matter of scalable incentive-shaping (e.g. social credit scores)

Two-way transparency

In a footnote:

In the case of surveillance, it seems important to aim for the two-way transparency advocated by David Brin […], where we all can watch the agencies that watch us.

However, Michael Huemer notes that there is currently little incentive for individuals to monitor the government (§9.4.4 of The Problem of Political Authority):

But no one will become passionate about monitoring a thousandth of the daily activities of government. To propose that the general public voluntarily sacrifice large portions of their lives to the task of studying such tedious matters as the provisions of the latest farm bill, all so that each can have a microscopic chance of improving a microscopic fraction of government policies, is at least as utopian as proposing that we all simply agree henceforth to work selflessly for the good of society.

Avoiding trade-off

Surveillance is often seen as involving a trade-off with privacy and/or accountability, but there are ways to improve both at the same time.1 For example, dogs in airports can detect explosives and illegal drugs without travelers’ bags having to be opened, hence respecting their privacy.

Automation might allow us to improve both privacy and surveillance.

It can act as a “screen” between initial data and humans.

  • Can make initial judgments (like sniffing dogs)
  • Can redact sensitive information (e.g. automatic face blurring)
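A minimal sketch of this “screen” idea, with illustrative keyword and redaction rules (the terms, pattern, and function names are assumptions, not from any cited system): automation flags records and redacts sensitive details, so humans only ever see flagged, redacted output.

```python
import re

# Hypothetical screening layer: humans see only flagged, redacted records.
# The keyword list and email-redaction rule are purely illustrative.
SUSPICIOUS_TERMS = {"explosive", "contraband"}
EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def screen(record):
    """Return a redacted copy of a flagged record, or None if not flagged.

    Analogous to a sniffer dog signalling without the bag being opened:
    unflagged records are never shown to a human at all.
    """
    if not any(term in record.lower() for term in SUSPICIOUS_TERMS):
        return None  # never reaches a human reviewer
    return EMAIL_PATTERN.sub("[REDACTED]", record)

print(screen("routine parcel, contact alice@example.com"))
# → None
print(screen("possible explosive residue, sender bob@example.com"))
# → possible explosive residue, sender [REDACTED]
```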

In certain respects, it is far more predictable and less opaque than humans.

  • Not a complete “black box”, as humans are
  • Easier to associate with reliable audit logs
  • Less likely to engage in certain abuses (e.g. LOVEINT)

It can make auditing more “scalable”.

  • Easier to audit a single piece of software used in a wide variety of cases than a large number of humans
  • Easier to associate with summary statistics (e.g. accuracy rate)
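To illustrate how an audited program lends itself to summary statistics, here is a toy sketch (field names and data are invented for illustration): an accuracy rate computed directly from a structured audit log.

```python
# Hypothetical audit log of an automated screening tool; each entry records
# the tool's decision and the later-established ground truth.
audit_log = [
    {"case": 1, "flagged": True,  "ground_truth": True},
    {"case": 2, "flagged": False, "ground_truth": False},
    {"case": 3, "flagged": True,  "ground_truth": False},  # false positive
    {"case": 4, "flagged": False, "ground_truth": False},
]

def accuracy(log):
    """Fraction of cases where the tool's flag matched the ground truth."""
    correct = sum(entry["flagged"] == entry["ground_truth"] for entry in log)
    return correct / len(log)

print(f"accuracy: {accuracy(audit_log):.2f}")
# → accuracy: 0.75
```

A single statistic like this summarizes the tool’s behavior over all cases, something much harder to produce for a large number of human decision-makers.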

Also, some forms of surveillance can work with metadata alone, avoiding data collection entirely.

  • “Set-intersection searches” identify individuals as suspicious if they show up in sufficiently many different data sets (e.g. cell records in different localities); Segal et al. (2014) show how to conduct such searches without data collection
  • Fraud detection often involves detecting discrepancies between different data sets; Bogdanov et al. (2015) show how to find discrepancies between companies’ private financial records without collecting them

Accountable algorithms can also serve this purpose.

  • Kroll et al. (2016) show how “zero-knowledge proofs” can be used to produce “accountable algorithms”

It’s often possible for the public to detect when actors stray from algorithms that have received auditor approval – or from algorithms with certain formal properties – without the algorithms’ details being revealed.
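One building block of such constructions is a cryptographic commitment: an agency publishes a hash of its decision policy in advance, so an auditor shown the policy later can check it matches, while the public learns only the hash. This toy sketch shows just that ingredient; the full zero-knowledge-proof machinery of Kroll et al. (2016) goes well beyond it, and the policy string below is invented.

```python
import hashlib
import secrets

def commit(policy_source):
    """Commit to a policy: publish the digest, keep the salt for the opening."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + policy_source).digest()
    return digest, salt

def verify(digest, salt, policy_source):
    """Check that a revealed (salt, policy) pair matches the published digest."""
    return hashlib.sha256(salt + policy_source).digest() == digest

policy = b"def decide(record): return record.score > 0.9"  # illustrative
digest, salt = commit(policy)            # digest is published in advance
assert verify(digest, salt, policy)      # auditor's check passes
assert not verify(digest, salt, b"a quietly substituted policy")
```

The hash binds the agency to one policy without revealing it publicly; any later substitution is detectable by whoever holds the opening.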

  1. The Future of Surveillance (video by Ben Garfinkel)