This is adapted entirely from Google's published AI Principles, courtesy of Google and Alphabet.
Objectives
- Be socially beneficial.
- Avoid creating or reinforcing unfair bias.
- Be built and tested for safety.
- Be accountable to people.
- Incorporate privacy design principles.
- Uphold high standards of scientific excellence.
- Be made available for uses that accord with these principles. (See important additional explanation at the primary source.)
Verboten
- Technologies that cause or are likely to cause overall harm.
- Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
- Technologies that gather or use information for surveillance violating internationally accepted norms.
- Technologies whose purpose contravenes widely accepted principles of international law and human rights.
Google does qualify this:
We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.
It’s clear this is dancing on a fence, but that uncomfortable position is inevitable in any constrained optimization problem.
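To make that intuition a little more precise, here is a minimal sketch, assuming the tradeoff can be modeled as a linear objective over a convex feasible region (the symbols $c$, $A$, $b$, and $F$ are illustrative assumptions, not anything from the source):

$$
\max_{x \in F} \; c^{\top} x, \qquad F = \{\, x : A x \le b \,\}
$$

For any nonzero linear objective over a bounded, nonempty polytope $F$, the maximum is attained at a vertex of $F$, never in the interior; some constraints are always active at the optimum. The fence, in other words, is exactly where the binding constraints sit, and any policy that genuinely trades benefit against hard limits ends up perched on it.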
Curiously enough, these wouldn’t be bad principles for governments and polities to follow, either.