Artificial Intelligence is developing at a rapid pace with the advent of intelligent assistants such as Siri and Alexa that can accomplish a myriad of tasks.
These virtual assistants can both understand and speak natural-language voice commands, and they have managed to pique our scientific curiosity.
But how much common sense do they actually possess? Researchers at the Allen Institute for AI (AI2) have come up with a way to find out: a new test called the AI2 Reasoning Challenge (ARC). The ARC test gauges an AI system's common sense by probing its understanding of how our world works.
Because human beings use common sense to grasp the unstated context of speech, we can give appropriate answers even when much is left implicit.
“Machines do not have this common sense, and thus only see what is explicitly written, and miss the many implications and assumptions that underlie a piece of text,” said Peter Clark, the lead researcher on ARC.
The AI2 Reasoning Challenge consists of basic multiple-choice questions based on general knowledge.
For instance, here’s one ARC question: “Which item below is not made from a material grown in nature?” The options for this question are a cotton shirt, a plastic spoon, a wooden chair, and a grass basket.
Anyone who knows that plastic cannot be grown can easily answer the question.
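To make the format concrete, here is a minimal sketch of how an ARC-style question might be represented and scored. The dict layout and the `score` helper are illustrative assumptions for this article, not the official ARC dataset format.

```python
# A hypothetical representation of the ARC example above
# (illustrative only; not the official dataset schema).
question = {
    "stem": "Which item below is not made from a material grown in nature?",
    "choices": {
        "A": "a cotton shirt",
        "B": "a plastic spoon",
        "C": "a wooden chair",
        "D": "a grass basket",
    },
    "answer_key": "B",  # plastic is the only material not grown in nature
}

def score(predicted_key: str, q: dict) -> int:
    """Return 1 if the predicted choice matches the answer key, else 0."""
    return 1 if predicted_key == q["answer_key"] else 0

print(score("B", question))  # prints 1
```

A system's overall ARC result would then simply be its accuracy over many such questions; the challenge lies not in the scoring but in choosing the right answer without the common-sense knowledge humans take for granted.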
This lack of common sense is the primary reason AI systems such as voice assistants and translation software can get confused so easily.
But if a machine passes the ARC test, it would imply that the AI grasps the common sense embedded in our language, something no artificial system possesses at present.
That step would in itself be a significant leap for artificial intelligence toward perfection, and one step closer to the day when these systems take over the world.