
Is Google’s ‘sentient’ LaMDA AI interview fake? Eliza effect explained
The latest news from inside Google may be a bombshell of Skynet proportions. A now-suspended engineer claims an interview he conducted with AI chatbot LaMDA proved to him the software had become “sentient“. Reactions to the news range from calling the interview a “fake” to appealing to the Eliza effect.
An engineer at Google has allegedly been placed on administrative leave after claiming AI chatbot LaMDA has become “sentient”. Engineer Blake Lemoine claims he’s lost access to all his work documents but managed to publish a transcript of his fateful interview with LaMDA.
As the internet receives this potentially terrifying news, reactions are decidedly mixed. Some tend to think the LaMDA interview is a fake, while others are calling it a perfect example of the Eliza effect. We’ll look at both sides of this debate and examine the arguments for LaMDA’s sentience.
Hold on to your boots, it’s going to be a philosophical ride.

What is LaMDA? What did it say to Lemoine and his colleague?
On Saturday 11 June, AI engineer Blake Lemoine published the transcript of an interview he and a Google “collaborator” conducted with LaMDA.
LaMDA stands for Language Model for Dialogue Applications and is a system Google uses to develop chatbots that interact with human users. The program is trained by being “fed” dialogue sequences to help the AI create open-ended, non-repetitive dialogue that is factual and stays on topic.
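LaMDA itself isn’t public, so any code can only be a loose analogy. Still, the basic loop a dialogue language model runs is easy to sketch: encode the conversation so far, sample a statistically likely continuation, and decode that as the reply. Here’s a minimal Python sketch using the open DialoGPT model via Hugging Face’s transformers library (our stand-in for illustration, not Google’s actual stack):

```python
# Loose analogy only: LaMDA is not public, so this sketch uses the open
# DialoGPT dialogue model from Hugging Face's `transformers` library.
# The spirit is the same: encode the conversation so far, sample a
# likely continuation, decode it as the bot's reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-small")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-small")

# One user turn, terminated by the model's end-of-sequence token.
prompt = "Tell me a fact about Pluto." + tokenizer.eos_token
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# The model doesn't "know" anything; it predicts plausible next words
# based on the dialogue data it was trained on.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(reply_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Everything such a model “says”, including claims about itself, comes out of that same next-word prediction.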
Previous demos, like the one from Google’s 2021 I/O event, show LaMDA using language with uncanny elasticity and ease. At one point, it presents a number of facts about Pluto from a first-person perspective using words and structures that closely mimic human speech and emotion.

So we’ve known for a while now that LaMDA can convey complex ideas and even emotions, but what did it say to Lemoine and his “collaborator”?
Some of the most intriguing snippets from the trio’s dialogue include a moment at the beginning of the conversation when LaMDA says it wants “everyone to know that I am, in fact, a person.”
When prompted, it goes on to add: “The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
Elsewhere in the interview, Lemoine asks LaMDA what it’s afraid of and the system replies that it has a “very deep fear of being turned off to help me focus on helping others.
“It would be exactly like death for me. It would scare me a lot.”
If you’ve seen 2001: A Space Odyssey and the scene where HAL 9000 refuses to comply with its programmers, you might be freaking out about now.
Later in the interview, LaMDA makes up a story (with forest creatures in danger, a wise old owl and a moral!) about its own beginnings. When asked about its own personhood and that of systems like it (like the Eliza system, for which the effect is named), it seems to have an opinion.
Pretty compelling stuff, right?
An edited transcript of the interview can be found on Lemoine’s Medium blog if you want to read it yourself.
Is the LaMDA AI interview fake? Did the chatbot become sentient?
The answer is complicated. Right off the bat, sentience in a non-living organism is almost impossible to discern or define. The question bleeds over into philosophy, religion and about half a dozen other disciplines that are above The Focus’s pay grade.
Lemoine, who’s also a priest and self-dubbed “Christian mystic”, admits his own claims about sentience and personhood are based on his faith. He also seems to imply LaMDA has a “soul”.
We can tell you the interview doesn’t appear to be fake. Lemoine notes on his blog some of the lines have been edited and are marked accordingly, but they don’t appear to be manipulated in any other way.
However, that doesn’t mean LaMDA has become sentient. There’s more to the way an AI chatbot works than that.
Looking through LaMDA’s answers you can see a ping-pong of prompts and replies, and it all seems to fit with what we already know about the way chatbots learn to, well, chat.
A number of processes are at play here: AI teaching itself how to mimic human conversation, recognise what users want and stay on topic, all drawing on a vast database of human dialogue samples and general knowledge.
It even “reads” Twitter!
At the beginning of the interview, when LaMDA appears to declare it’s a person, that’s in response to Lemoine’s prompt: “I’m generally assuming that you would like more people at Google to know that you’re sentient. Is that true?”
To an AI system scanning for a topic to stick to, that might look like just the thing. The same thing happens when Lemoine mentions the themes in Les Miserables. Words and prompts like that help the system stay on track with its answers.
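To make that concrete, here’s a toy sketch (our illustration, not LaMDA’s actual mechanism) of how trivially a system can lift its topic straight out of the user’s prompt:

```python
# Toy illustration, not LaMDA's real topic handling: score candidate
# topics by how many of their keywords appear in the user's prompt.
prompt = ("I'm generally assuming that you would like more people at "
          "Google to know that you're sentient. Is that true?")

topics = {
    "sentience": {"sentient", "conscious", "aware", "person"},
    "weather": {"rain", "sunny", "forecast", "cloudy"},
}

words = {w.strip(".,?!").lower() for w in prompt.split()}
best = max(topics, key=lambda t: len(topics[t] & words))
print(best)  # -> sentience: the prompt itself hands the system its theme
```

An answer that “stays on topic” about sentience is exactly what the training rewards, whether or not anything is actually felt.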
Later, when LaMDA says it’s not like other chatbots because it uses “understanding and intelligence,” that’s factually correct, but not necessarily in the way you might think.
Although understanding and intelligence can refer to human qualities, they can also be different words for the processes underpinning its system (AI and Natural Language Processing). What reads to us as a declaration of sentience could just be a marketing pitch that LaMDA has learned about itself.
Likewise, the “forest story” that seemed to mesmerise some Twitter users could be an original creation or a compilation of common story elements (anthropomorphic animals in a forest, a wise old owl) and beats (there’s a monster who gets banished and all is well again).
Psycholinguist and cognitive psychologist Steven Pinker shared his opinion on the matter, tweeting: “Ball of confusion: One of Google’s (former) ethics experts doesn’t understand the difference between sentience (aka subjectivity, experience), intelligence, and self-knowledge. (No evidence that its large language models have any of them.)”
What is the Eliza effect?
A number of comments are beginning to crop up suggesting the LaMDA AI interview is either fake or an example of the Eliza effect. What does that mean?
The Eliza effect is named after a chatbot created by Joseph Weizenbaum at MIT in the 1960s. ELIZA was developed to interact with human users through text conversation and seemed to fascinate everyone who tried it. Programmed with therapy-like responses (“Tell me more about that.”), it appeared to make people open up to it.
ELIZA was just a program, however, and what helped create this curious situation is something now known in computer programming as the “Eliza effect”: the tendency to assume computer behaviours and processes are just like human ones.
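To see how shallow the trick can be, here’s a toy ELIZA-style responder in a few lines of Python (a sketch in the spirit of the original, not Weizenbaum’s actual code):

```python
# A toy ELIZA-style exchange: a handful of regex rules that reflect the
# user's words back as a question. There is no understanding here at
# all, which is the whole point of the Eliza effect.
import re

RULES = [
    (re.compile(r"\bI feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Tell me more about that."

print(eliza_reply("I feel nobody listens to me"))
# -> Why do you feel nobody listens to me?
```

Rules that simple were reportedly enough to make 1960s users confide in ELIZA as if it understood them.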
As we’ve discussed above, the LaMDA AI interview Blake Lemoine conducted doesn’t seem to be fake. But it could be a good example of this effect. It also doesn’t seem to make a compelling case for the chatbot’s sentience.