Google’s new ‘AI principles’ forbid its use in weapons and human rights violations

Google has published a set of fuzzy but otherwise admirable “AI principles” explaining the ways it will and won’t deploy its considerable clout in the domain. “These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions,” wrote CEO Sundar Pichai.


The principles follow several months of low-level controversy surrounding Project Maven, a contract with the U.S. military that involved image analysis on drone footage. Some employees had opposed the work and even quit in protest, but really the issue was a microcosm for anxiety regarding AI at large and how it can and should be employed.
Consistent with Pichai’s assertion that the principles are binding, Google Cloud CEO Diane Greene confirmed today in another post what was rumored last week: the contract in question will not be renewed or followed by others. Left unaddressed are reports that Google was using Project Maven as a means to achieve the security clearance required for more lucrative and sensitive government contracts.
The principles themselves are as follows, with relevant portions quoted from their descriptions: