Elon Musk And Google DeepMind Founders Pledge Never To Build Killer Robots

Mark this day! The day when the leaders of tomorrow came together on common ground and vowed never to build the killer robots of the future.

At the International Joint Conference on Artificial Intelligence (IJCAI), Elon Musk, the three co-founders of Google's AI subsidiary DeepMind, and a number of AI experts signed a pledge never to build, or take part in building, “lethal autonomous weapons.” The coalition of AI experts and researchers was organized by the Future of Life Institute, whose mission is to support beneficial AI initiatives and curb the “existential risks” posed by AI.

More than 2,400 individuals, including Skype founder Jaan Tallinn, and over 160 companies gathered in Stockholm, Sweden, and declared that they will not take part in the development, manufacture, or trade of autonomous weapons.

The now-published pledge calls on “technology companies and organizations, as well as leaders, policymakers, and other individuals” to establish strict international norms to counter the emerging threats of AI and to frame laws against those who violate them.

“AI has huge potential to help the world — if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way,” Future of Life Institute President Max Tegmark said in a statement.

The future implications of artificial intelligence have already drawn widespread concern, prompting Google to publish its own AI ethics principles, while Elon Musk has separately warned about the dangers of AI in an open letter.
