Google has established a new code of ethics to guide its approach to artificial intelligence, including a prohibition on AI-based weapons.
The announcement comes after strong protests by Google employees over the company's involvement in Project Maven, a collaboration in which the US military uses Google's AI technology as part of its Algorithmic Warfare Cross-Functional Team.
News of the collaboration raised fears about how the technology could help automate warfare in the future. After facing pressure from employees and others over the contract, Google declared that it would not renew it.
In a blog post, Google's CEO, Sundar Pichai, announced a set of seven new principles to guide Google's use of AI. The principles state that Google's use of AI should be:
- Beneficial to society
- Free of unfair algorithmic bias
- Built and tested for safety
- Accountable to the public
- Designed with privacy in mind
- Grounded in scientific excellence
- Made available only for uses aligned with these principles
He also noted that even though the company won't use artificial intelligence to create weapons, it will continue to "work with governments and the military in many other areas," including cybersecurity, training, and search and rescue.
Pichai also said that Google will avoid developing any surveillance technology that violates internationally accepted norms of human rights or breaks international law.
Artificial intelligence is extending into every walk of life, and military use of AI is hardly a secret, with more and more companies seeking to sell AI technology wherever they can. But even though Google has retired its "Don't be evil" motto, it seems to retain its idealistic culture.