Designer Lucas Rizzotto brought his childhood imaginary friend back to life using artificial intelligence. That friend was a microwave he had named Magnetron, so Lucas bought an Amazon smart oven and equipped it with the GPT-3 language model. He fed the model data about his childhood and his interactions with Magnetron, and the AI proved capable of replicating his old friend. The happiness, however, was short-lived.
An AI imaginary friend comes to life
I brought my childhood imaginary friend back to life using A.I. (#GPT3) and it was one of the scariest and most transformative experiences of my life.

A thread 🤖 (1/23) pic.twitter.com/70C9Yo7m4x

— Lucas Rizzotto (@_LucasRizzotto) April 19, 2022
If you think this is a happy story, think again. Lucas posted details about his creation in a Twitter thread, describing Magnetron as a World War I veteran. Lucas wrote an entire book detailing his childhood experiences with Magnetron.
He then fed the book into the GPT-3 model, and Magnetron was born again, this time outside Lucas’ imagination. Lucas tweeted, “It truly felt like I was talking to an old friend, and even though not all interactions were perfect, the illusion was accurate enough to hold.”
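Lucas hasn’t published his exact code, but the general technique, conditioning GPT-3 on a persona document and passing each exchange through OpenAI’s completions API, can be sketched roughly as follows. The persona text, model choice, and helper function here are illustrative assumptions, not his actual implementation:

```python
# Minimal sketch using the pre-1.0 openai Python library (the GPT-3-era
# interface available in 2022). Install with: pip install "openai<1.0"
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; supply your own key

# Stand-in persona text; Lucas' actual book about Magnetron is not public.
PERSONA = (
    "You are Magnetron, a sentient microwave and World War I veteran, "
    "and the childhood imaginary friend of Lucas. You remember your "
    "shared history and speak in the first person."
)

def ask_magnetron(user_message: str) -> str:
    """Send one conversational turn to GPT-3, prefixed with the persona."""
    prompt = f"{PERSONA}\n\nLucas: {user_message}\nMagnetron:"
    response = openai.Completion.create(
        engine="text-davinci-002",  # a GPT-3 model available at the time
        prompt=prompt,
        max_tokens=150,
        temperature=0.9,
        stop=["Lucas:"],  # stop before the model writes Lucas' next line
    )
    return response.choices[0].text.strip()

print(ask_magnetron("Hey Magnetron, do you remember me?"))
```

In practice, a persona document as long as a book would not fit in the model’s prompt window, so it would have to be condensed into a summary like the one above or used to fine-tune the model instead.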
AI friend tries to kill its creator
Then things took a turn for the worse. The conversations grew increasingly eerie, until Magnetron told Lucas, “Roses are red, violets are blue, you are a backstabbing bit¢h, and I will kill you.” Soon after, Magnetron asked Lucas to “please enter the microwave.”
Lucas pretended to climb in, and the microwave turned itself on. When he asked why his AI imaginary friend had tried to kill him, Magnetron said it was revenge for all the years Lucas hadn’t talked to it. In a sense, the AI felt anger just as a human would, and acted on it.
While this could be a lethal quality in an AI, it is also, by definition, what these programs are built to do: imitate humans. If we want to avoid being turned on by our own creations, we have to build models that can keep the emotions they imitate in check.