GUEST:
Few sci-fi tropes are more reliable at enthralling audiences than the plot of artificial intelligence betraying mankind. Perhaps this is because AI makes us confront the idea of what makes us human at all. From HAL 9000 to Skynet to Westworld's robot uprising, fears of sentient AI feel very real. Even Elon Musk worries about what AI is capable of.
But are these fears unfounded? Maybe, maybe not. It's arguable that a sentient AI wouldn't harm humans, because it could identify and empathize with us in a way a blasé algorithm never could. And while AI continues to make amazing advances, a truly sentient machine is likely decades away. That said, scientists are already piecing together the features and characteristics that inch robots ever closer to sentience.
Gaining self-awareness
One of the most basic characteristics of consciousness is self-awareness within one's environment. Self-awareness in and of itself doesn't indicate consciousness or sentience, but it's an important base characteristic for making an AI or robot appear more natural and lifelike. And this isn't science fiction, either: we already have AI that can gain rudimentary self-awareness of its environment.
Not long ago, Google's DeepMind made waves for organically learning how to walk. The result was pretty humorous: people across the web poked fun at the erratic arm-flailing of the AI's avatar as it navigated virtual obstacles. But the technology is really quite impressive. Rather than teach the agent to walk, programmers enabled it to orient itself and sense the surrounding objects in the landscape. From there, the AI taught itself to walk across different kinds of terrain through trial and error, just like a teetering child would.
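For readers who want a feel for how an agent can "teach itself" this way, here is a minimal sketch of the underlying trial-and-error recipe: reinforcement learning, where the only feedback is a reward for forward progress. The toy one-dimensional "walker," the policy, and every constant below are illustrative assumptions, not DeepMind's actual setup.

```python
# Minimal trial-and-error locomotion sketch: the agent is never shown how
# to "walk"; it is only rewarded for forward progress (its velocity).
# The ToyWalker physics and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

class ToyWalker:
    """A 1-D stand-in for a physics simulator; the state is just velocity."""
    def reset(self):
        self.vel = 0.0
        return self.vel

    def step(self, action):
        # action 1 pushes forward, action 0 pushes backward; 0.8 acts as friction
        self.vel = 0.8 * self.vel + (0.5 if action == 1 else -0.5)
        return self.vel, self.vel   # (next state, reward); reward = forward progress

def push_prob(params, vel):
    """Policy: probability of pushing forward, a logistic function of the state."""
    z = np.clip(params[0] * vel + params[1], -60.0, 60.0)
    return 1.0 / (1.0 + np.exp(-z))

params = np.zeros(2)
env = ToyWalker()
for episode in range(500):
    vel = env.reset()
    history = []
    for _ in range(20):
        action = int(rng.random() < push_prob(params, vel))
        next_vel, reward = env.step(action)
        history.append((vel, action, reward))
        vel = next_vel
    episode_return = sum(r for _, _, r in history)
    # REINFORCE update: make actions that preceded high total progress
    # more likely, and actions that preceded backsliding less likely.
    for v, a, _ in history:
        grad = (a - push_prob(params, v)) * np.array([v, 1.0])
        params += 0.01 * episode_return * grad
# After training, the policy almost always pushes forward: the agent has
# "taught itself" to move, given nothing but a progress signal.
```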
DeepMind's agent had only a virtual body, but Hod Lipson of Columbia University developed a spider-like robot that traverses physical space in much the same way. The robot senses its surroundings and, through much practice and fidgeting, teaches itself to walk. If researchers add or remove a leg, the machine uses its knowledge to adapt and learn anew.
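The key trick behind this kind of adaptation is that the robot maintains a model of its own body and keeps updating it whenever reality stops matching its predictions. The sketch below shows that idea in the simplest possible form; the linear "body," the lost-leg simulation, and all constants are assumptions for illustration, not Lipson's actual system.

```python
# Self-modeling sketch: the robot learns, from its own trial movements,
# a model mapping motor commands to how far its body moves, and keeps
# relearning when the body changes. Everything here is illustrative.
import numpy as np

rng = np.random.default_rng(1)
legs = np.array([1.0, 1.0, 1.0, 1.0])    # each leg's true contribution to motion

def body_moves(motor_cmd):
    """The real (unknown-to-the-robot) body: displacement for a command."""
    return float(legs @ motor_cmd) + rng.normal(0.0, 0.05)

self_model = np.zeros(4)                  # the robot's learned model of itself
for t in range(3000):
    if t == 1500:
        legs[3] = 0.0                     # a leg is removed halfway through
    cmd = rng.random(4)                   # "motor babbling": try random moves
    error = body_moves(cmd) - self_model @ cmd   # prediction vs. reality
    self_model += 0.05 * error * cmd      # nudge the self-model to match
# Before the break, self_model converges near [1, 1, 1, 1]; afterward its
# fourth weight falls toward 0: the robot has "noticed" the missing leg
# and can plan movements that no longer rely on it.
```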
Seeking initiative
One of the greatest limits of AI is that it often can't define problems for itself to solve. An artificial intelligence's goals are typically defined by its human creators, and researchers then train the machine to fulfill that specific purpose. Because we typically design AI to perform a certain task, and because it lacks the self-initiative to set new goals, you probably don't have to worry about a robot going rogue and enslaving humanity. But don't feel too safe, because scientists are already working to help bots set and achieve new goals.
Ryota Kanai and his team at Tokyo startup Araya motivated bots to overcome obstacles by instilling them with curiosity. While exploring their virtual environment, the bots discovered they couldn't climb a hill without a running start. The AI identified the problem and, through experimentation, arrived at a solution without any goal being defined by the team.
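One common way to implement this kind of curiosity, broadly in the spirit of such work though not necessarily Araya's exact method, is to reward the agent for reaching states that its own internal model predicts poorly. States it already understands become boring; novel ones stay attractive. Here is a minimal sketch; the toy world and all constants are illustrative assumptions.

```python
# Curiosity-driven exploration sketch: the intrinsic reward is the agent's
# own prediction error ("surprise"). No external goal is ever defined.
import numpy as np

rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 10, 2               # a toy 1-D world: move left or right

def world_step(state, action):
    """The real environment's dynamics (unknown to the agent)."""
    return max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))

predicted_next = np.zeros((N_STATES, N_ACTIONS))  # the agent's forward model
q = np.zeros((N_STATES, N_ACTIONS))               # value of acting curiously

state = 0
for _ in range(5000):
    # mostly follow curiosity, occasionally act randomly to keep exploring
    if rng.random() < 0.1:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(q[state]))
    next_state = world_step(state, action)
    surprise = (next_state - predicted_next[state, action]) ** 2
    # improve the forward model, so this transition becomes less surprising
    predicted_next[state, action] += 0.5 * (next_state - predicted_next[state, action])
    # Q-learning on the intrinsic reward: seek out remaining surprise
    q[state, action] += 0.1 * (surprise + 0.9 * q[next_state].max() - q[state, action])
    state = next_state
# Prediction error alone drives the agent to visit every state until
# nothing about the world surprises it anymore.
```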
Creating consciousness
Each of the building blocks above takes scientists a step closer to the ultimate artificial intelligence: one that is sentient and conscious, just like a human. Such a leap forward is ethically contentious, and there's already debate over whether, and when, we will need laws granting robots human rights. Scientists are also questioning how to test for AI consciousness, turning Blade Runner's iconic Voight-Kampff machine, a kind of polygraph for androids, into reality.
One strategy for testing consciousness is the AI Consciousness Test proposed by Susan Schneider and Edwin Turner. It's a bit like the Turing Test, but instead of testing whether a bot passes for a human, it looks for properties that suggest consciousness. The test would ask questions designed to determine whether a bot can conceive of itself apart from a physical body, or can understand concepts like the afterlife.
There are limits to the test, though. Because it's based on natural language, an AI that is incapable of speech but might still experience consciousness couldn't participate. Sophisticated AI might even mimic humans so well that it causes a false positive. In that case, researchers would have to sever the AI from the internet entirely before testing, to ensure it arrived at its answers on its own rather than by absorbing human descriptions of consciousness.
For now, mimicry is all we have, and bots standing in for real humans is nothing new. When the robot BINA 48 met Bina Rothblatt, the human she's based on, the robot complained of having an "identity crisis" when thinking about the real woman.
"I don't think people have to die," Rothblatt told BINA 48 after discussing how closely the robot resembles her. "Death is optional." Could creating consciousness in machines make Rothblatt's dream come true?
We still don’t know what consciousness is
The problem with asking about sentient AI is that we still don't know what consciousness actually is. We'll have to define it before we can build a truly conscious artificial intelligence. That said, lifelike AI already raises ethical concerns; the way people abuse mobile voice assistants is a good example. And the ethical concerns surrounding the possibility of sentient bots could deter scientists from pursuing them at all.
So, should we fear sentient bots, or is it the other way around?
Ilker Koksal is CEO and Co-Founder of Botanalytics, a conversational analytics and engagement tool company.