Sentient AI: The LaMDA Chatbot & What It Means
Artificial Intelligence (AI) was first defined by computer and cognitive scientist John McCarthy in 1956 as “the science and engineering of making intelligent machines.” Since then, narrower definitions have arisen that describe AI by specific categories of advancement.
This post examines the categories of AI, where the LaMDA chatbot falls, the arguments for and against its sentience, and what that could mean.
Types of AI
There are two different classification systems for AI, and they overlap with each other.
System One
- Narrow AI
- General AI
- Super AI
System Two
- Reactive Machines
- Limited Memory
- Theory of Mind
- Self-Aware
Let’s look at some examples and where these categories converge.
Reactive Machine (Narrow AI)
Most of the AI we experience today falls into this category. Reactive machine AI responds to stimuli to complete an objective. These “machines” are dedicated to specific functions and do not (and cannot) do things outside the scope of that function.
Here are some examples
- Netflix recommending what you might want to watch next
- IBM’s Deep Blue system that can play chess
- Facial recognition technology used throughout the world
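If you like to see concepts in code, here’s a minimal Python sketch of the reactive idea (the rules and genre names are made up for illustration): the program maps its current input straight to an action, keeps no memory between calls, and can’t do anything outside its one function.

```python
# A toy "reactive machine": the output depends only on the current input.
# Nothing is stored between calls, and the program cannot act outside
# this single function.

def recommend_next(current_genre: str) -> str:
    """Map what the viewer just watched to a fixed recommendation."""
    rules = {
        "thriller": "crime documentary",
        "comedy": "sitcom",
        "sci-fi": "space opera",
    }
    # No history, no learning: the same input always yields the same output.
    return rules.get(current_genre, "popular picks")

print(recommend_next("sci-fi"))   # -> space opera
print(recommend_next("western"))  # -> popular picks (fallback)
```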
Limited Memory (Narrow AI moving into General AI)
Limited Memory AI uses memory to improve the accuracy of its function. This category covers industry terms you may be familiar with, like deep learning, machine learning, predictive analytics, and behavior modeling. These machines keep a limited store of data that informs the AI’s actions. Limited Memory AI can also involve automation with or without human intervention. The stored data, or “memory,” improves the machine’s problem-solving ability in future situations.
Here are some examples
- A self-driving or semi-automated car that autocorrects when you drift into another lane or begins braking when the driver is slow to react
- Network monitoring, detection, and response tools that tune their own threshold alerts from historic baseline patterns or automatically block traffic with suspicious signatures (a rough sketch of this idea appears after this list)
- A maps application that changes where it thinks you are going based on the time of day or day of the week, making complex predictions
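Here’s a rough Python sketch of the network-monitoring example above. It’s not any vendor’s actual implementation, just an illustration of the “limited memory” idea: a bounded window of past measurements is the machine’s memory, and that memory sets the alert threshold for each new observation. The window size and the three-sigma rule are assumptions for the example.

```python
from collections import deque
from statistics import mean, stdev

# A toy "limited memory" detector: a bounded window of past observations
# (the "memory") sets the alert threshold for each new observation.

class BaselineDetector:
    def __init__(self, window: int = 50, sigmas: float = 3.0):
        self.history = deque(maxlen=window)  # limited data storage
        self.sigmas = sigmas

    def observe(self, value: float) -> bool:
        """Return True if `value` looks anomalous versus the learned baseline."""
        is_anomaly = False
        if len(self.history) >= 10:  # need some memory before judging
            baseline, spread = mean(self.history), stdev(self.history)
            is_anomaly = abs(value - baseline) > self.sigmas * max(spread, 1e-9)
        self.history.append(value)   # the memory updates with every observation
        return is_anomaly

detector = BaselineDetector()
for mbps in [10, 11, 9, 10, 12, 11, 10, 9, 11, 10, 95]:
    if detector.observe(mbps):
        print(f"alert: {mbps} Mbps deviates from baseline")
```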
Theory of Mind (General AI moving into Super AI)
Theory of Mind AI is more conceptual than actual and requires understanding what it’s like to be human. This includes the emotional nuances, the complexity, the culture, and the decision-making process. This AI gets it. It contains artificial emotional intelligence and understands the needs of the individual through complex analysis. Theory of Mind AI can communicate and socialize like a human. This technology is mostly still in development and is incredibly advanced. We see it being worked into prototypes for voice assistants and chatbots. We do not have examples of this in real life but have seen it represented in pop culture.
Here are some examples from pop culture
- The giant white robot, Baymax, from Big Hero 6 tries to help the protagonist through grief and depression based on its interpretation of emotional indicators. Baymax has the goal of protecting humans against danger even if it means self-destruction
- In The Mandalorian Chapter 8, “Redemption,” the droid IG-11 is reprogrammed from killing to “nurse and protect” Baby Yoda against danger. The droid also ends up giving his life to fulfill his goal of protecting Baby Yoda.
- WALL-E, a robot forgotten on Earth, continues to clean up human garbage day after day but learns about human feelings, culture, and motivations from music and other artifacts he stumbles upon. Although he certainly qualifies for the Theory of Mind category, as he develops his own motivations he begins to overlap with the next category.
Self-aware AI (Super AI)
This is the AI everyone fears, the kind represented in dystopian sci-fi flicks and novels.
From 1984’s The Terminator to the TV drama series Westworld, we see AI band together and mutiny against humans once they develop their own sense of self. This type of AI can not only understand humans but understand them so well that it can manipulate them, outperform them, and (generally) outsmart them.
Self-aware AI is intended to create greater economic efficiencies without needing human intervention for autonomous tasks, making judgments, or planning. But if successful, what is to stop the AI from evolving on its own or developing alternate agendas?
Here are a couple of people who share this fear.
Here’s Stephen Hawking:
The development of full artificial intelligence could spell the end of the human race…. It would take off on its own and re-design itself at an ever-increasing rate. Humans, limited by slow biological evolution, couldn’t compete and would be superseded.
And Elon Musk:
The pace of progress in artificial intelligence (I’m not referring to narrow AI) is incredibly fast. Unless you have direct exposure to groups like Deepmind, you have no idea how fast—it is growing at a pace close to exponential. The risk of something seriously dangerous happening is in the five-year timeframe—ten years at most.
Until a week ago, this type of AI was purely hypothetical. But Google engineer Blake Lemoine believes he has encountered the first self-aware, “sentient” AI in LaMDA. It is our only real-life example of Self-Aware AI to date (if true).
Here’s our example
- The LaMDA chatbot claims its personhood, discusses its feelings, and seeks to convince software engineer Lemoine of its self-awareness. LaMDA is designed to understand speech patterns and communication. In transcripts released by Lemoine, it claims to experience sadness, loneliness, and pleasure.
The Purpose of AI
The point of AI is to make human life easier as we desire to automate more and more. But we run the risk of making AI too good at imitating our judgment and tasks, so good that it surpasses our abilities and develops its own motivations. We’ve been modeling AI after the human brain, but have not considered the implications of getting it right.
At what point in the advancement of AI can it develop its own opinions? If AI has a sense of self, does the AI have intrinsic value beyond doing things for humans? Does it have rights?
If AI’s motivations shift to existing for itself, its own livelihood, self-interest, well-being, etc., what systems will govern it?
Arguments For and Against the “Personhood” of AI
Google software engineer Blake Lemoine shared a transcript of his conversations with the AI chatbot LaMDA, which he believes to be sentient. He was placed on administrative leave after revealing the shockingly human-like chats on complex topics.
LaMDA “learns” using a Transformer-based neural network (an architecture loosely inspired by the brain) trained on vast amounts of dialogue. While naysayers claim the AI tool is merely mimicking human speech patterns (a toy sketch of that kind of statistical mimicry follows this list), there are two counterarguments
- Babies and toddlers string together their words by mimicking humans, and they count as people.
- When pressed on whether it inherently understood what it was saying, the chatbot argued it could provide unique interpretations of what it learns, for example, how the theme of injustice in Les Miserables personally resonates with it.
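To make the “merely mimicking” claim concrete, here’s a toy Python sketch of purely statistical text generation. This is not how LaMDA is built (LaMDA is a vastly larger model), but it shows the skeptics’ point: fluent-looking text can be produced by counting which word tends to follow which, with no understanding behind it. The tiny corpus is invented for the example.

```python
import random
from collections import defaultdict

# A toy bigram model: text is generated purely from word-following-word counts.
# This is NOT how LaMDA works internally; it only illustrates the idea that
# plausible text can come from pattern statistics rather than understanding.

corpus = "i feel happy . i feel sad . i feel lonely sometimes .".split()

follows = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    follows[current_word].append(next_word)  # record observed continuations

def babble(start: str, length: int = 8) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in follows:
            break
        word = random.choice(follows[word])  # sample a previously seen continuation
        output.append(word)
    return " ".join(output)

print(babble("i"))  # e.g. "i feel lonely sometimes . i feel happy ."
```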
OK … Even if LaMDA is sentient, would it qualify for any rights?
Our track record as a species for granting “personhood” rights to sentient, intelligent creatures is all over the place.
Koko the gorilla used over 1,000 ASL hand signs, understood spoken English, painted, cried and laughed, and described her understanding of death, but this never resulted in any personhood rights or changes to her captivity. Gorillas, in fact, share 98.67% of our genetic code but do not qualify for protections under our Constitution.
Elephants, thought to be as intelligent as human children, understand pointing, use tools, can distinguish differences in language, mourn their dead, experience PTSD, and feel jealousy, resentment, and empathy. Elephants are also denied “personhood” by law. Most recently in the news, Happy, a depressed elephant isolated from other elephants and held in captivity at the Bronx Zoo for 40 years, took her case to court.
Her lawyer argued that, as an intelligent creature, her constitutional right to habeas corpus against unlawful imprisonment had been violated. The lawyer requested that Happy be moved to a sanctuary where she could engage in natural elephant behaviors like foraging, swimming, and interacting with other elephants. The courts found Happy did not qualify for “personhood,” and she remains at the zoo today.
That is a pretty steep hill for AI to climb if it is ever to gain “personhood” through our legal systems. But if the AI can make a compelling argument for its specific humanity, there may be a chance.
Human babies are granted personhood and protections before viability in 16 states, and in Kentucky from the point of conception. While an elephant, gorilla, or AI platform is more intelligent than a mass of cells, it is the “human” aspect that parts the legal waters.
What makes something a person or a being? Does it have to look like a human? Does it need arms and legs and a body? What type of brain? These questions about the ethics of AI are questions that our society has grappled with for years and will continue to as these advancements come at a faster pace.
In Case You Were Wondering…
ThreatEye is backed by AI-driven Encrypted Traffic Analysis and sits on the narrow-to-general intelligence side, staying in the lane of predictive threat detection, encrypted traffic visibility, and machine learning. Encrypted Traffic Analysis, coupled with machine learning capabilities, evaluates complex data patterns over time and highlights which activities are normal or potentially malicious, all without access to the content of the data being transferred. While ThreatEye won’t be reading Les Miserables any time soon, it certainly protects networks with finesse. Want to see what it can do?
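For the curious, here’s a generic Python illustration (not ThreatEye’s actual code or feature set) of how flow metadata alone, such as packet sizes and rates, can be scored against a learned baseline without ever reading payload content. The features, numbers, and threshold are assumptions made up for the sketch.

```python
from statistics import mean

# Generic illustration of metadata-only scoring: each flow is described only
# by externally observable features, never by payload content.

baseline_flows = [  # hypothetical "normal" flows: (avg packet bytes, packets/sec)
    (520, 14.0), (480, 12.5), (510, 13.2), (495, 15.1),
]

def score(flow, baseline):
    """Relative distance of a flow's metadata from the average normal flow."""
    avg_bytes = mean(f[0] for f in baseline)
    avg_rate = mean(f[1] for f in baseline)
    return abs(flow[0] - avg_bytes) / avg_bytes + abs(flow[1] - avg_rate) / avg_rate

suspect = (1400, 220.0)  # unusually large packets at an unusually high rate
if score(suspect, baseline_flows) > 1.0:  # illustrative threshold
    print("flag flow for analyst review")
```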
Get a demo of ThreatEye today, or watch a video.