Google will not develop artificial intelligence (AI) for use in weapons, its CEO Sundar Pichai has said.
Google recently announced it would stop work with the Department of Defense on Project Maven, an AI project that analyses imagery and could be used to enhance the efficiency of drone strikes.
The announcement came after the company's involvement in the Pentagon project drew criticism. Thousands of employees signed a petition warning that Google's participation contravened the company's ethical principles, saying that "Google should not be in the business of war".
The letter warned that the company's involvement would compromise its image and drive away potential employees, according to The Independent.
Pichai also said the company would not design or deploy AI for weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people. He said Google would not develop technologies that gather or use information for surveillance in violation of internationally accepted norms, or technologies whose purpose contravenes widely accepted principles of international law and human rights.
Pichai, in a blog post, wrote:
"We want to be clear that while we are not developing AI for use in weapons, we will continue our work with governments and the military in many other areas.
"These include cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue. These collaborations are important and we'll actively look for more ways to augment the critical work of these organizations and keep service members and civilians safe".
Pichai also announced seven principles to guide the company's AI work going forward. He said these were not theoretical concepts but concrete standards that would "actively govern our research and product development and will impact our business decisions".
(With inputs from agencies)