In part 2 of this series we looked at robots and robotic devices that are available today. Some, like the robotic training aids used to teach people CPR, medical diagnostics, dentistry and even childbirth, are fairly sophisticated. The Geminoid line of robots is incredibly life-like.
If you’re wondering whether this was in fact a real robot, or actually a person pretending to be a robot: it is not a fake. This is the latest iteration of the Geminoid series of ultra-realistic androids from the Japanese firm Kokoro and Osaka University’s mad-scientist roboticist Hiroshi Ishiguro. Specifically, this is Geminoid DK, which was constructed to look exactly like associate professor Henrik Scharfe of Aalborg University in Denmark.
Prof. Scharfe confirmed: “No, it is not a hoax,” adding that he and colleagues in Denmark and Japan have been working on the project since 2010. His Geminoid, which cost some US $200,000, is going to be used for studying human-robot interaction, in particular people’s emotional responses when they face an android representing another person. Prof. Scharfe wants to find out if the robot can transmit a person’s “presence” to a remote location, and whether cultural differences affect people’s acceptance of robots.
Have you seen the movie Surrogates?
At this point the makers of Geminoids are focusing on realistic facial expressions and human interaction. These are not true autonomous robots/androids, in that they are teleoperated and do not process information to formulate a response, but the possibilities are immense. Read on.
Earlier this month (July 2013) Boston Dynamics unveiled Atlas, their contender for the DARPA Robotics Challenge, which aims to develop robotics hardware and software that can be used to handle extreme emergencies, such as an accident at a nuclear power plant. Boston Dynamics has also been working on the LS3 Big Dog, a robotic pack animal for the military that can follow a human companion while carrying a heavy load AND remain totally balanced and stable on all terrains, even on ice. Much of that stability research has been rolled into the bipedal Atlas. Atlas has been given arms with interchangeable hands to handle a variety of tasks. Its sensory input is astounding!
CHARLI, which won the 2011 RoboCup soccer challenge, was the USA’s first full-size, fully autonomous, bipedal robot. All its power systems and software are on-board; it is aware of its surroundings and uses optics and laser range finders to interact with them. Fast it isn’t, but it had to be programmed to locate the soccer ball, dribble it across the “field,” then aim and kick the ball into a goal net. The goal was defended by its opponent, though not very well. All this takes some really sophisticated programming.
And of course there is Asimo, Honda’s autonomous bipedal robot that has been in development for close to 30 years and is considered the world’s front runner as a truly functional household helper. Honda finally released a video in English, aimed at the American home market.
Add to this collection the robot baby developed by the University of California, San Diego, where researchers hope a humanoid robot can help in therapy sessions with special needs children. What makes this one special is that it “learns,” not through software upgrades, but by interacting with people.
Learning is in fact a primary objective for any future robotic household assistant. Hard-coded instruction is fine for a robot’s equivalent of our autonomic systems (heart beating, breathing, eyes blinking as needed, etc.) and for its mobility and sensory input, but relying on such instructions alone will never allow a robot to anticipate the needs of its owner or to deal effectively with a new situation.
All of this establishes a sort of baseline. As an automotive analogy, these robots represent the Ford Model A. What do we need to do to build the robotic equivalent of a Ford Taurus? The goal is a robotic servant and companion that can perform household duties as well as (or better than) the resident human. A robot nanny, caregiver, maid, cook and companion.
The largest obstacle to robotics and artificial intelligence at the moment is the lack of number-crunching ability. We talked about the immense challenge of programming CHARLI to scan for a ball, position itself in relation to the ball, move the ball across a playing area and kick it into a goal. Not only did the movements have to be programmed, but the robot has to do things we take for granted, like maintaining balance even when standing on one foot while the other prepares to kick the ball. The last video above discusses the task of grasping a drinking glass and lifting it to our lips. It’s a piece of cake for us, unless we’re physically challenged, but for a robot it’s another story. Those researchers found that using proximity sensors in the robot’s hand, rather than optical or laser scanners and a huge set of instructions, greatly simplified the task. Now it can “feel” the glass just as we would.
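To make the sensor idea concrete, here is a minimal sketch of a feedback loop that closes a hand around an object using per-finger proximity readings. Everything here is invented for illustration: the sensor functions, the 0-to-1 position scale and the contact threshold are assumptions, not the researchers’ actual system.

```python
# Hypothetical sketch: closing a robotic hand around a glass using
# proximity-sensor feedback instead of a large vision-driven rule set.
# Sensor interface, thresholds and step size are all invented here.

def close_hand(read_finger_sensors, set_finger_positions, contact_threshold=0.9):
    """Close each finger a little at a time until its proximity sensor
    reports near-contact, so the grip conforms to the object's shape."""
    positions = [0.0] * 5              # 0.0 = fully open, 1.0 = fully closed
    while True:
        readings = read_finger_sensors()   # one proximity value per finger, 0..1
        done = True
        for i, r in enumerate(readings):
            if r < contact_threshold and positions[i] < 1.0:
                positions[i] = min(1.0, positions[i] + 0.05)  # small step closer
                done = False
        set_finger_positions(positions)
        if done:
            return positions
```

The point of the design is that the stopping condition comes from touch rather than from a pre-computed model of the glass, which is why the instruction set stays small.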
While the world waits for the quantum computer to be perfected and made available, A.I. researchers are exploring some interesting alternatives that make better use of the computing power already available.
A quantum computer, by the way, uses quantum bits, called qubits, to crunch through its operations. Just like bits, qubits can represent either a zero or a one, but the real juice is in a third possibility called “superposition”: they can represent both one and zero at the same time.
This quirky ability means that the same string of qubits can represent lots of different things simultaneously. For example, a set of two qubits in superposition represents four possible situations at the same time – [0, 0]; [0, 1]; [1, 0]; or [1, 1]. 
Do you find this hard to follow? That’s okay! Some of the very intelligent people who study it for a living are just as perplexed; even among physicists, how best to interpret what goes on inside a quantum computer is still debated. Call it magic if you like. But once perfected, a quantum computer should be capable of running certain calculations millions of times faster than anything we have available now.
In the meantime, one alternative is to farm out much of the computing to an external computer in the home, accessed by the robot via a Wi-Fi connection. This allows for a more powerful processor than could be placed inside the bot, given its size and power constraints, but introduces the problem of signal lag. A more efficient Wi-Fi connection will help; dedicating one to the bot will be required. If it used the household Wi-Fi signal and all the kids were on their tablets watching music videos, dinner might be greatly delayed!
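One way to cope with that lag, sketched below under assumed names: the robot tries the remote planner first, but falls back to a simple on-board reflex whenever the round trip misses its deadline or the link drops. The “server” here is just a stand-in function; a real system would be making a network call.

```python
# Minimal sketch of the offloading idea: prefer the powerful home
# computer, but never let the robot stand frozen waiting on Wi-Fi.
# plan_action, remote_planner and onboard_reflex are invented names.
import time

def plan_action(sensor_data, remote_planner, onboard_reflex, deadline_s=0.05):
    """Try the remote planner; if it is too slow or unreachable,
    fall back to a simple behaviour computed on the robot itself."""
    start = time.monotonic()
    try:
        action = remote_planner(sensor_data)  # in reality: a Wi-Fi round trip
    except Exception:
        return onboard_reflex(sensor_data)    # link down: stay safe locally
    if time.monotonic() - start > deadline_s:
        return onboard_reflex(sensor_data)    # answer arrived too late to trust
    return action
```

The design choice is the same one the paragraph implies: lag is tolerable for planning dinner, but the bot still needs something local and fast for moment-to-moment safety.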
Another option is to put dedicated programming into interchangeable modules. Keep the basic programming hard-coded, then have modules that can be plugged in to prepare chicken teriyaki, or to take the kids upstairs, tuck them into bed, read them a story and turn off the light as it leaves, or to play a game of chess with you, or make a run to the supermarket for more bread and milk. Choose a specific task, plug in that module, take it out when done and fit in another.
But, once again, hard-coded instructions have severe limitations when it comes to dealing with the unexpected. If the program has to include if-then instructions for every conceivable outcome of every action, the code rapidly becomes immense. Using learning algorithms is actually more efficient and flexible. Making that learning modular makes a lesson applicable to multiple problems, allowing the android to “reason” much as we do.
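The contrast can be made concrete with a toy example. Below, a hard-coded rule table can only handle situations someone anticipated in advance, while a simple learner (the situations, actions and reward scheme are all invented for illustration) improves its choice from feedback and can cope with cases nobody wrote a rule for.

```python
# Toy contrast between hard-coded if-then rules and a simple learner.
# Situation and action names are made up for this sketch.

HARD_CODED = {
    "glass on table": "grasp gently",
    "toy on floor":   "pick up and shelve",
}

def rule_based(situation):
    # Every situation must be anticipated in advance; anything else fails.
    return HARD_CODED.get(situation, "do nothing")

class Learner:
    """Remembers which action earned the best feedback per situation."""
    def __init__(self, actions):
        self.actions = actions
        self.scores = {}                   # (situation, action) -> running reward

    def choose(self, situation):
        # Pick the action with the best accumulated reward so far
        # (unknown situations just default to the first action).
        return max(self.actions,
                   key=lambda a: self.scores.get((situation, a), 0.0))

    def feedback(self, situation, action, reward):
        key = (situation, action)
        self.scores[key] = self.scores.get(key, 0.0) + reward
```

A real household robot would need something far richer, but the shape of the argument is visible even here: the rule table grows with every new contingency, while the learner’s code stays the same size no matter how many situations it encounters.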
Psychology & Ethics of Robots
Asimo is humanoid, but not human-realistic. The Geminoids are human-realistic. I’m sure both types of robot will be made available for people of differing tastes. Some will relate better to a robot that looks more like a big toy, some to one they may forget is not human. Will human-realistic robots start a wave of people abandoning other humans in favor of synthetic people? Yes, yes it will… in fact it has already started.

Multiple print and TV news pieces have covered men (and a few women, but mostly men) who have chosen to have a full-time relationship with their… ahem… synthetic love partner. It started as a sex doll, then (these people claim) “love” bloomed, and now they dress them up and carry their “partner” around with them, treating them like a living, disabled human partner. To be clear, I am not talking about those silly inflatable things. Several companies, most notably the Japanese company Orient Industry, are producing skin-on-bone dolls that are so realistic in looks and touch that, except for the fact they don’t move (on their own… yet), you would not be able to tell they were fake. An American porn company is producing a robotic doll named Roxxy that does move, in limited fashion, and will “converse” with you.
Is this freakish or is this becoming more common? In February of 2013, NOVA asked, “Imagine that you could order up a robotic boyfriend or girlfriend. Would you do it?”
According to Sherry Turkle, an MIT professor who studies the social impacts of science and technology, more people than ever are answering “yes” to this (currently hypothetical) question. After all, human partners are fallible; they disappoint; they get angry and distracted. Robot mates, on the other hand, are endlessly attentive to our needs. They have no other demands on their time and attention. We cannot frustrate or disappoint them. They require no risk and no compromise.
On a slightly less creepy note, how many people do you know who break out in hives if they get more than 5 feet from their cell phone? I know several, and I don’t get out much! People do develop strong attachments to their gadgets. When those gadgets start looking and acting like people, that attachment will only escalate. Not everyone will expect or want a physical relationship, but even a “friend” who comes without the baggage of a human friend is appealing to some. I wonder what that will do to the fabric of society! Of course, if the AI folks do their jobs really well, even the androids will have neuroses and “baggage”! Then they will cease to be robots. They will be synthetic humans. Cylons?
Another problem is an age-old (literary) question: do they have rights? In the beginning the answer will be “No, they are machines and have no more rights than a toaster or lawn mower.” But that will change. As they improve and develop a consciousness, even a synthetic one, activist groups will lobby to extend them all the privileges and rights that humans have. And if that is stricken down, does android ownership constitute slavery? What sort of relationship do you suppose will take the place of “ownership”: employee, adoption, what?
The movie Bicentennial Man addresses some of these issues in a very thought-provoking way.
How Far Away?
Honda’s own Asimo promotional videos are proof that personal robot helpers are not far off. They will start out as the gofer-fetcher, check-on-so-and-so, mobile information service, set-the-table, get-the-mail, remind-me-to-take-my-medicine sort of assistant. These could roll out almost any time. Giving them the physical strength and dexterity to get an elderly person out of bed and into a wheelchair, and the computational power for true Artificial Intelligence, will take another decade or two. And they will be expensive: well into six figures, at least to start.
Like any new technology, the first wave is only for the well-off. Once the development and manufacturing costs are recovered and the process streamlined, they will start to be affordable enough for some of the rest of us.
Potential For Evil
Let’s start with a minor evil: unemployment. Once robots can be mass-produced, they will certainly be used to replace people in the workplace: first in hazardous tasks, then increasingly in other venues as employers discover the cost-effectiveness of a workforce that doesn’t unionize or demand wage increases. Here’s an interesting (and humorous) look at why this might actually be a good thing from a fellow who says, “I, for one, welcome our new robot overlords” – including Furbies?
The world is rife with robot apocalypse stories. And that is certainly a possibility: if we succeed in building an entity that can learn and think for itself (which is a main goal), who is to say that it won’t decide it doesn’t like us? If they share a collective Wi-Fi consciousness (or even if they don’t), who’s to say they won’t conspire to turn the tables on the human population and take over as the dominant species?
In a word: programming.
If all robots/androids are programmed with a fondness and respect for humans in the base-line code, with Asimov’s Three Laws of Robotics as unalterable instructions, humans should be quite safe. The problems will come when people (hackers and evil geniuses) start monkeying with robotic systems. And you know they will.
Of course, military robots will not be programmed to love us; they will be programmed to kill. The risk of these running amok and turning on us is much greater.
But, if a robot revolution arises, it won’t be because of the robots, it will be because of people. Even the most amazingly helpful device humankind devises can be perverted and turned lethal. But, that’s us: people do that. To eliminate that threat, we must first root out that trait in ourselves.
Here are a few movies about robots that I especially recommend as insightful and thought-provoking.
- Part 1 and Part 2 of this series
- IBM Research on Artificial Intelligence
- Fully Autonomous War Plane Can Even Land On A Carrier
- Artificial intelligence system has IQ of 4-year-old
- Smart Hands Research for Robotics and Prosthetics
- Autonomous Steering Cars
- Awesome Robots from ICRA 2013
- Robot Evolution
- Why Human-like Robots Creep Us Out