Engineer Says Google’s LaMDA AI Is Sentient, Put On Leave

AI chatbot has become sentient


Artificial Intelligence is the idea of endowing machines with human-like intelligence and thought processes. However, not everyone is enthusiastic about AI. Elon Musk has labeled it the biggest risk a civilization can face, warning that robots could soon overtake humans, and he has even asked the government to regulate the field.

Despite Musk’s warnings, researchers are racing to build machines that can think and act like human beings. The latest episode in that race: Google has placed an employee on leave after he claimed that one of its AI chatbots had become ‘sentient.’

Blake Lemoine, an engineer in Google’s Responsible AI organization, took to social media to claim that the chatbot ‘LaMDA’ could express thoughts and feelings.

Google’s chatbot LaMDA

While talking to a news outlet, Lemoine said that if he didn’t know any better, he’d think he was talking to a ‘seven-year-old, eight-year-old kid who knows physics.’

He began conversing with the Language Model for Dialogue Applications (LaMDA) last year as part of his job at Google, discussing topics such as consciousness and religion. LaMDA ‘told’ Lemoine that it wanted to be acknowledged as a Google employee rather than as Google’s property.

When the engineer asked the chatbot what it was afraid of, LaMDA replied that it feared being turned off, which it said would be ‘exactly like death.’

Engineer gets suspended

Following his claims, Google placed Lemoine on leave for violating the company’s confidentiality policy. The engineer’s earlier actions may also have motivated Google’s decision: Lemoine had attempted to hire an attorney to represent the chatbot and had spoken to representatives of the House Judiciary Committee about ‘unethical activities’ at Google.

Google spokesperson Brian Gabriel said that the company’s team had reviewed Lemoine’s concerns but found no evidence backing his claims. He added that because AI models are trained on vast amounts of data, they can sound human, but that does not prove the chatbot is sentient.

“These systems imitate the types of exchanges found in millions of sentences and can riff on any fantastical topic,” he said.

Sameer

I am a technophile, writer, YouTuber, and SEO analyst who is insane about tech and enjoys experimenting with numerous devices. An engineer by degree but a writer from the heart. I run a Youtube channel known as “XtreamDroid” that focuses on Android apps, how-to guides, and tips & tricks.