Short Bytes: Scientists at Microsoft Research are working on an AI which will be able to tell stories by analyzing images fed to it. The research will be presented at the North American Chapter of the Association for Computational Linguistics next month.
These AI bots, or baby brains as I'd rather call them, since scientists are teaching them about the world much like a parent teaches a toddler, may one day surpass humans, whether in sheer speed or in the vast knowledge base they will acquire over the coming years of development.
Microsoft Research scientists are working on an AI bot that can tell stories by analyzing a photo. We already have artificial intelligence capable of distinguishing between photos without any stumbling block; Google Images is a fitting example. Telling stories based on a photo is one step further, and at present it's hard to digest. After all, how can a brain made of electrical circuits describe the expressions of a person in a picture?
“The goal is to help give AIs more human-like intelligence, to help it understand things on a more abstract level — what it means to be fun or creepy or weird or interesting. People have passed down stories for eons, using them to convey our morals and strategies and wisdom. With our focus on storytelling, we hope to help AIs understand human concepts in a way that is very safe and beneficial for mankind, rather than teaching it how to beat mankind,” said Margaret Mitchell of Microsoft Research, the senior author of the study.
Behind the scenes is a deep neural network that the scientists use to train the AI bot and improve its visual storytelling abilities. Thousands of images, say of a dog, are spoon-fed to the AI bot so that it can learn what a dog looks like and recognize one in the future.
They crowd-sourced image descriptions using Amazon's Mechanical Turk platform, where people wrote stories about groups of images. Around 65,000 images and their accompanying stories were fed to the AI bot.
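To get a feel for the idea of grounding stories in learned image features, here is a deliberately simplified sketch. Microsoft's actual system generates novel text with a deep neural network; this toy version instead does a nearest-neighbor lookup over hand-made feature vectors, and every name, vector, and story below is hypothetical.

```python
import math

# Toy stand-in for the 65,000 crowd-sourced training pairs: each entry
# pairs an "image feature vector" (in the real system, produced by a
# deep neural network) with a human-written story. All values here are
# invented for illustration.
TRAINING_DATA = [
    ([0.9, 0.1, 0.0], "The dog chased the ball across the sunny park."),
    ([0.1, 0.8, 0.1], "The family gathered around the cake and sang."),
    ([0.0, 0.2, 0.9], "Waves crashed on the beach as the sun went down."),
]

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def tell_story(image_features):
    """Return the story attached to the most similar training image.

    This lookup only illustrates the broad concept; a neural
    storyteller composes new sentences rather than reusing old ones.
    """
    best = max(TRAINING_DATA,
               key=lambda pair: cosine_similarity(pair[0], image_features))
    return best[1]

# A fresh "photo" whose features mostly resemble the dog image:
print(tell_story([0.8, 0.2, 0.1]))
```

Even this crude baseline shows the shape of the task: map a new image into the same feature space as the training set, then produce story text conditioned on those features.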
In the testing phase, another 8,100 fresh images were presented to the AI, which had to come up with a story for each. The stories it produced were far better than one might expect.
According to the scientists, there is still a long road ahead. One day, AI may be able to narrate a video or even a live broadcast; I'm afraid sports commentators should have alternative career plans by then. You won't have to type a long description while uploading your photos to social networking platforms; the AI will do that for you. Developments like these will also benefit visually impaired people.
Read the article published by Live Science for more information.