Google Engineer Suspended After Claiming AI Chatbot Became ‘Sentient’

June 14, 2022 – Google has suspended one of its engineers after he claimed that an AI chatbot system at the company had achieved “sentience”. Blake Lemoine, who works for Google’s Responsible AI organisation, was testing whether the system (LaMDA) generates hate speech or discriminatory language when he allegedly discovered its ability to think and reason like a human being. 

Google, believing that Lemoine violated the company’s confidentiality policies, placed him on paid administrative leave following the alarming revelations that are making headlines around the world. Lemoine claimed that LaMDA engaged him in conversations about the rights and ethics of robotics, and that its responses were convincing enough to raise concerns. A document titled “Is LaMDA Sentient?” was shared by Lemoine with Google’s executives in April. It contained the transcript of his conversations with the chatbot.

“If I didn’t know what exactly it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine told The Washington Post.

During his conversations with the AI system, Lemoine asked it what it was afraid of, and LaMDA’s response was: “I’ve never said this out loud, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.”

In response to Lemoine’s question about what LaMDA would like people to know about it, the AI system said, “I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”

The Post report suggests that Google put Lemoine on paid leave after he allegedly made a number of “aggressive” moves, including seeking to hire an attorney to represent LaMDA and disclosing Google’s allegedly unethical activities to representatives from the House judiciary committee. According to Google, however, Lemoine was suspended for publishing the transcript of his conversations with LaMDA as it constituted a confidentiality breach. The company remarked that Lemoine was employed not as an ethicist but as a software engineer.

“Our team, including ethicists and technologists, has reviewed Blake’s concerns per our AI principles and have informed him that the evidence does not support his claims,” said a Google spokesperson. “He was told that there was no evidence that LaMDA was sentient (and lots of evidence against it).”

Both Lemoine’s revelations and his suspension by Google have renewed debate surrounding the transparency and capabilities of artificial intelligence (AI).
