Google Promises Its A.I. Will Not Be Used for Weapons


Image by Reuters clipped header original

Google pledged Thursday that it will not use artificial intelligence in applications related to weapons, in surveillance that violates internationally accepted norms, or in technologies that contravene human rights.

Google insisted last week that its AI technology is not being used to help drones identify human targets, but told employees that it would not renew its contract after it expires in 2019.

While Google always said this work was not for use in weapons, the project may have fallen foul of the new restrictions, as Google said it will not continue with Project Maven after its current contract ends. AI is also key to Google's future ambitions, many of which involve ethical minefields of their own, including its self-driving Waymo division and Google Duplex, a system that can make dinner reservations by mimicking a human voice over the phone.

The principles state that Google's AI should "be made available for uses that accord with these principles". "Google is already battling with privacy issues when it comes to AI and data; I don't know what would happen if the media starts picking up a theme that Google is secretly building AI weapons or AI technologies to enable weapons for the defense industry".

Over 4,000 Google employees ended up protesting Google's involvement with the Pentagon, saying in an open letter that Google should not be in the "business of war".


In publishing a memo like this, Google isn't taking a bold stance or committing to uphold any particular belief; it is simply trying to pull itself out from under its most recent scandal.

Google's contract with the Defense Department came to light in March after Gizmodo published details about a pilot project shared on an internal mailing list. Among the published principles: "We aspire to high standards of scientific excellence as we work to progress AI development".

The principles also state: "We will incorporate our privacy principles in the development and use of our AI technologies. We will work to limit potentially harmful or abusive applications". Several employees said that they did not think the principles went far enough to hold Google accountable; for instance, Google's AI guidelines include a nod to following "principles of worldwide law" but do not explicitly commit to following global human rights law. Military applications Google will still pursue include those in "cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue".

"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas", Google's CEO Sundar Pichai said in a separate blog post. "These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe".
