These are ethical “AI Principles” from Google, but they might as well be “technological principles.”

This is entirely adapted from this link, courtesy of Google and Alphabet.

Objectives

  1. Be socially beneficial.
  2. Avoid creating or reinforcing unfair bias.
  3. Be built and tested for safety.
  4. Be accountable to people.
  5. Incorporate privacy design principles.
  6. Uphold high standards of scientific excellence.
  7. Be made available for uses that accord with these principles. (See important additional explanation at the primary source.)

Verboten

  1. Technologies that cause or are likely to cause overall harm.
  2. Weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people.
  3. Technologies that gather or use information for surveillance violating internationally accepted norms.
  4. Technologies whose purpose contravenes widely accepted principles of international law and human rights.

Google does qualify:

We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas. These include cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue. These collaborations are important and we’ll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe.

It’s clear this is a fence-straddling position, but that discomfort is inevitable in any optimization problem.

It’s curious, but these wouldn’t be bad principles for governments and polities to follow, either.
