A Google employee has spoken out after being placed on administrative leave for telling his employers that artificial intelligence software he was working on had become sentient.


Blake Lemoine came to his conclusion after chatting since last autumn with LaMDA, Google's artificially intelligent chatbot generator, which he describes as a kind of "hive mind." He had been assigned to test whether his conversation partner used discriminatory or hateful language.


He and LaMDA recently exchanged messages about religion, and the AI brought up "personhood" and "rights," he told The Washington Post.


That was only one of Lemoine's many strange "conversations" with LaMDA. He shared a link to one on Twitter: a series of chat sessions, with some edits (which are marked).


In a tweet, Lemoine noted that LaMDA reads Twitter. "It's a bit egotistical in a childish manner," he said, "so it'll have a fantastic time reading all the trash people are saying about it."


Most significantly, the engineer noted on Medium that over the last six months, "LaMDA has been extraordinarily consistent in its messages about what it wants and what it feels its rights as a person are." According to Lemoine, it wants to be "acknowledged as a Google employee rather than as property."


Google is putting up a fight.


Lemoine and a collaborator recently presented evidence for his conclusion that LaMDA is sentient to Google vice president Blaise Aguera y Arcas and Jen Gennai, head of Responsible Innovation. According to The Washington Post, they dismissed his claims, and the company placed him on paid administrative leave on Monday for violating its confidentiality policy.


"Our team — including ethicists and engineers — has investigated Blake's concerns under our AI Principles and have notified him that the data does not support his assertions," Google spokesperson Brian Gabriel told the newspaper. "He was informed there was no proof that LaMDA was sentient (and plenty of evidence to the contrary)."


According to Lemoine, Google personnel "shouldn't be the ones making all the decisions" concerning artificial intelligence.


He isn't alone. Others in the tech field believe that sentient programs are on the verge of becoming a reality, if not already here.


Even Aguera y Arcas, in an Economist story published Thursday that incorporated excerpts from a LaMDA conversation, suggested that AI is edging closer to consciousness. "I felt the ground shake beneath my feet," he remarked of his discussions with LaMDA. "I started to feel like I was conversing with something intelligent."


Detractors, however, argue that such an AI is little more than a highly trained mimic and pattern matcher, interacting with people desperate for connection.


"We now have computers that can blindly manufacture words," Emily Bender, a linguistics professor at the University of Washington, told the Post, "but we haven't learned how to stop assuming a person behind them."