Short Bytes: AI is getting better every day, but the negative consequences of AI becoming smarter can't be ignored. DeepMind, a Google AI division based in the UK, has created an AI safety group to keep dangerous AI in check.

Artificial intelligence is making computers smarter by giving them the ability to think like humans, and perhaps even to surpass us in the coming years.
Such systems are regarded as helping hands for humanity. Yet the possibility can't be ruled out that these thinking machines may one day overpower us and take control of our race. Stephen Hawking, who has criticized poorly managed AI development, spoke about the consequences of AI advancement last month at Cambridge University, where he said that the development of AI could be either the best or the worst thing ever to happen to humanity.
By then, it would be too late to regret having created a weapon capable of wiping out the human race. DeepMind is a well-known name in the artificial intelligence field. The company has already considered what AI systems could become, and it has begun preparing in advance.
According to Business Insider, an AI safety group has been formed under DeepMind's roof to keep an eye on the development of artificial intelligence systems and make sure they don't turn into something harmful to humans.
Viktoriya Krakovna (@vkrakovna), Jan Leike (@janleike), and Pedro A. Ortega (AdaptiveAgents) are the three people recently appointed as Research Scientists as part of the AI safety group at DeepMind. Not many details about the group are available.
The AI safety group will work on reducing dangers from AI development.
Krakovna is also a co-founder of the Future of Life Institute, based in the Boston area. The institute, backed by well-known names like Morgan Freeman, Stephen Hawking, and Elon Musk, works to eliminate threats to human society from AI, nuclear power, and other risks.
As for the other two hires, Jan Leike is a research associate at the Future of Humanity Institute, University of Oxford. His research focuses on making machine learning robust and beneficial.
Pedro Ortega, who goes by AdaptiveAgents online, holds a Ph.D. in Engineering from the University of Cambridge. Before joining DeepMind, he was a postdoctoral scholar at the University of Pennsylvania.
If you have something to add, tell us in the comments below.