Virtual Emotion for Robot - Towards Human Support Robot

May 17, 2018


KEN TOMIYAMA: Thank you.

I found out that probably I am the only robot technician here in this group, so I was going to talk more about the robots than ethics. But, after I heard the talks yesterday, I decided to add some of my thoughts on ethics into my presentation. The first half of my presentation is just about robotics and the other half is going to be a combination of robotics and ethics.

This is today's agenda. First, there are many ideas about what a robot is, so I decided to talk about what the robot is; and then on what is emotion, because my own research is the development of emotions for robots, so I want to talk about emotion.

The second one, robots. I'm going to show you some typical robots that have to do, in some form, with emotion.

Then, emotion studies; and 4 and 5 are the highlights—virtual Kansei, virtual emotion, volition, and ethics. The fifth one is the one I added. The key topics of my talk today are 4 and 5.

First, robot and emotion. What is a robot? There are many answers for that. People have different images of robots.

One answer is "a machine that helps people." Certainly, that is how the robot was originally conceived. My answer is "a useful, clever tool." The keyword is "tool." It is a tool.

I belong to the Future Robotics Research Center at Chiba Institute of Technology, and there we say, "A robot is a machine that feels, thinks, and moves." If you combine those three together, that is a robot. That is our central idea of the robot.

How do robots feel? Of course, they use sensors to measure. How about thinking? Processing by computer, that's AI. And how they move? By actuating their joints with, most of the time, motors. But the feel has a different meaning recently. The new understanding of feel is emotion and sensibility.

Do robots have sensitivity, or emotion? I would like to know your answers now, but I'm going to talk about it a bit more.

What is emotion? Like "robot," "emotion" has many different meanings, and people have different ideas about it.

Is emotion what makes a human a human? This is certainly correct. Mental functions particular to human? Yes. Shortcut in signal processing in human? Many decisions are made not based on the logical reasoning but by emotion, right? This decision-making process is very quick, so it's a shortcut. It is also conceived as a part of Kansei.

Kansei is a Japanese word that encompasses many different things, like affectiveness and sensitivity, etc., etc., and emotion is a part of Kansei. Actually, emotion is the easiest function among those Kansei features. An old Chinese saying claims that emotion is one of the three principles—intelligence, emotion, and will, that decide human actions.

The other understanding of emotion is that it consists of joy, anger, fear, disgust, sadness, and surprise. This set was identified by Ekman, a very famous psychologist, who showed that these six emotions are common across many different cultures.

Typical misunderstandings. Well, the easiest way to understand something is to look at the misunderstandings. If a robot has emotion, it can understand human emotion. What do you think? My answer is no. They are different functions. Knowing an emotion and having an emotion are different things.

Next one. If a robot has emotion, it can be kind to humans. What do you think? Of course, no. Emotion is not intelligence. You have to have intelligence to know how to be kind to humans.

If a robot has emotion, will it be able to decide what it does by itself? Of course not. That is volition; that is will.

Now, the basic question is: Can a robot have emotion? Before the ethics, this was my big question: Can a robot have emotion?

Fact one, we are not machines; fact two, emotion is specific to humans. Therefore, a robot does not have emotion? What do you think?

Those are the induced questions. Do robots need emotion? Do you think they need emotion? Do we want robots' emotion, or do we want emotional robots? Do we?

In the first place, is machine emotion a viable concept? If there is one—well, let's assume that there is one—then what is it good for, and how can we make it? Those are the questions. Especially the last one, since I'm an engineer. I'm very interested in how we can make one, if there is one. What are your answers?

All right, let's move on to robots. There are many different types of robots. Some of them Go already showed you. I will show you some other ones, and some of ours as well.

Buddy, JIBO—I guess you are familiar with JIBO—and Zenbo. This is FURO-i. Although the name of the laboratory where I work is fuRo, this is not ours.

That is TAPIA developed in Japan. This is CHiP and this is PARO. PARO is also Japanese and is used in welfare facilities. MUSIO plays music for you. AIDO—I forgot what it does—Aisoy, and many more. Those are called "social robots." What they can do is to be a companion for human beings. That's their major function.

Robots at fuRo are not social robots. (laugh) This one is not our robot but one of us wearing a hat-like thing, which is the cover of the robot he developed. Actually, those are our robots. His hat actually came from this robot over here.

We call this morph3 a "metal athlete." It was built in 2003, a long time ago. Then Rosemary and Sakura; those are the ones that went into the Fukushima nuclear disaster site and collected data. Rosemary was the first robot ever to go in and collect data.

This Halluc robot and its controller, HULL, really are a beautiful set. We spent a lot of money on that (smile). The controller is this one here. He is sitting in it. By the way, the guy you saw in the previous slide is this guy over here. You control this robot using this controller.

This is ILY-A, a transforming vehicle, which I would like to show you how to move and transform. This is ILY-A. [Video plays]

This one was built about three years ago. The driver is cute, isn't he? He is one of us, a very skillful member. ILY-A transforms into four different forms with different functions. You can ride it as a motorcycle, as seen here, and of course it has sensors so that it won't collide with humans or other obstacles. He is not just riding on it but using it as a scooter here. He is having fun. I'm sure about that.

Then, you can change it to the cart mode and carry baggage like this. So, this is really a fun vehicle. We call ILY-A a future mobility. All right, this is it.

Let me come back to emotion—since I'm studying emotion, let's look at more emotion studies.

One of the famous series of emotion studies in Japan is done by Waseda University. They created lots of robots with so-called "emotions."

This one is called Kobian. It shows the six emotions defined by Ekman—surprise, disgust, joy, sadness, anger, and fear—plus a neutral state. What do you think? Kobian really looks angry when in the anger mode. Do you see which one is anger? This one here, right? This robot can show you the so-called "emotion," but really?

This is a very famous work I guess that you guys are familiar with. Cynthia Breazeal at MIT came up with this one called Kismet. If you look at the face, it doesn't look like a real human face, but it sure can express emotions.

This one, PARO, was already shown to you, but if I may— [Plays video]

Sorry that we all have to look at the commercial. This is a baby seal. It is really cute. I have PARO in my lab. This one is used at care facilities. Those people who are in wheelchairs can enjoy being with these machines, or tools, or toys I should say.

This is the guy who built this robot.

The interesting part about this robot is that it has learning capability, so if you treat it well, then its behavior becomes nicer; but if you just kick it and do nasty things, then it will become very nasty. Anyway, that's PARO.

This Pepper is very—how shall I say?—popular in Japan. Although Pepper is much bigger, it and PARO are sold for about the same price.

Now move onto what virtual Kansei or virtual emotion is. I say that those robots are able to express their emotion. Really? Is that really true?

Well, my claim is that, given that the original concept of a robot is a human helper and that future robots will have to work very closely with people and anthropomorphism is part of human nature, therefore, robots need emotion to be friends of ours.

But who needs angry robots or sad robots? Think of Marvin in The Hitchhiker's Guide to the Galaxy; probably most of you know this. On the right is the older version; on the left is the newer version of Marvin.

Do you need one? No one, I hope—well, at least, I don't. It would be useless or dangerous or both. That's what I believe.

Then why do robots need emotions—or emotion-like functions? I am not going to say "emotions"—emotion-like functions, such as showing anger. Why do we need the distinction?

This comes from the application of my robots. I am developing a robot that can be used in welfare facilities. My robots will be very close to human beings, and they have to understand the status of the cared person, not only the physical status but also the mental state. Therefore, the performance of my robot depends heavily on how compatible it is with the cared person. My robot must account for the taste and emotional tendency of its partner. Many cared persons prefer my robot to be happy. Then again, some of them prefer my robot to be angry, or to behave like it is angry. It all depends on the cared person.

So, my answer is: A robot is a machine; no matter how useful and clever it is, it does not have emotion. However, a robot can have an emotion-like function, a function that lets humans see and feel emotion in robots. That is all you need, actually. A robot doesn't have to be emotional. Therefore, I use the words "virtual emotion." It is not a real emotion, but robots have this capability so that we can feel emotion in our robots.

On the virtual emotion—I will show you two or three slides that have to do with my own research.

Mimicking human emotion transition, meaning that robots should be able to express humanlike emotion. And taking a partner's emotion as input because that's what we do. Our emotion is heavily influenced by the emotional state of our partner. My robot is going to be very close to the cared person, so it has to be able to respond to the emotional state of the cared person.

Two-layered construction of virtual emotion—we skip this slide.

Virtual emotion consists of a detector that detects the partner's emotional state, a generator to generate robot emotion, and a modulator that modifies robot behavior to express the robot's emotion.

The emotion generator has a manipulatable structure for creating the personality of my robot, meaning that I want to modify the way the robot emotion is generated so that my robot can have high compatibility with the partner. In other words, my robots can behave more like the Seven Dwarfs, each with different emotional characteristics.

This may be too technical, but this is the controller of my robot. The bottom portion is the logical part, which every robot has. The top portion is the new one I am giving my robot: an emotion detector, an emotion generator, and a Kansei expression modulator.

This small central module, this is the virtual emotion generator, but I'm taking this whole thing as the wide-sense virtual emotion.
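The detect-generate-modulate pipeline described above can be sketched in code. This is only an illustrative sketch of the idea, not Tomiyama's actual implementation: the class and function names, the keyword-based detector, and the personality parameters are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Ekman's six basic emotions, as cited in the talk, plus a neutral state.
class Emotion(Enum):
    JOY = auto()
    ANGER = auto()
    FEAR = auto()
    DISGUST = auto()
    SADNESS = auto()
    SURPRISE = auto()
    NEUTRAL = auto()

@dataclass
class Personality:
    """Hypothetical tunable parameters shaping how robot emotion is generated
    (the 'manipulatable structure' that gives each robot its personality)."""
    empathy: float = 0.8       # how strongly the partner's emotion is mirrored
    cheerfulness: float = 0.5  # bias toward positive states

def detect_partner_emotion(observation: str) -> Emotion:
    """Emotion detector: maps a (stand-in) observation to an emotion label."""
    keyword_map = {
        "smile": Emotion.JOY,
        "frown": Emotion.SADNESS,
        "shout": Emotion.ANGER,
    }
    return keyword_map.get(observation, Emotion.NEUTRAL)

def generate_robot_emotion(partner: Emotion, p: Personality) -> Emotion:
    """Emotion generator: chooses the robot's virtual emotion as a
    personality-dependent response to the partner's state."""
    if partner is Emotion.SADNESS and p.empathy > 0.5:
        # An empathetic robot mirrors sadness rather than staying cheerful.
        return Emotion.SADNESS
    if p.cheerfulness > 0.5:
        return Emotion.JOY
    return Emotion.NEUTRAL

def modulate_behavior(base_action: str, emotion: Emotion) -> str:
    """Kansei expression modulator: colors a planned action with the
    generated emotion so the human can see emotion in the robot."""
    return f"{base_action} [{emotion.name.lower()}]"

# One pass through the pipeline: detect -> generate -> modulate.
partner = detect_partner_emotion("frown")
robot_emotion = generate_robot_emotion(partner, Personality(empathy=0.9))
print(modulate_behavior("hand over the cup", robot_emotion))
# prints: hand over the cup [sadness]
```

The point of the sketch is the separation of concerns: the detector and modulator stay fixed, while the generator is parameterized so that the same robot can be tuned to match different cared persons.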

Now, applications for robot emotion. As you can guess, the welfare field is the one I'm talking about.

In welfare-related applications, robots must work very, very close to humans. Therefore, they need metaphysical as well as physical interaction capability and robots in the welfare field really need virtual emotion. Incidentally, I send all of my students to care houses and let them have experiences of helping cared persons by themselves. They realize the importance of knowing the emotion of the cared person and the need to modify their own emotion accordingly.

This is the diagram showing my robot in a care facility. Probably the keyword here is a "caregiver support" robot. I am not going to build a care robot that replaces human caregivers, but I am building a robot that works with human caregivers and that is why I call my robot a caregiver support robot.

This part represents the interaction between the caregiver and the cared person. This part is very important, and I say we should not do away with it. A human in the care cycle is indispensable. The human cannot be abolished and should not be replaced. That's my standpoint. The caregiver support robot takes over the parts of the caregivers' work that are suitable for robots. By doing that, a single caregiver can take care of more cared persons.

So, my robots are not taking away human jobs. They are helping human caregivers so that they can take care of more cared persons. That is exactly what we need in Japan. We have a large shortage of caregivers. The total quality of nursing care is improved with my caregiver support robot.

Now, volition and ethics. This is the part I added at the last minute, and I apologize that the slides in this portion are almost words only.

There are three phases of robot ethics. This comes from the book called Robot Ethics, with which I agree completely. That is why I'm citing it here.

  • The first one is the professional ethics of roboticists, the people who build robots.
  • The second one is a moral code programmed into the robots themselves. So somehow you develop a code which has moral content in it. The moral code is for the robots not for the roboticists. The important part is that the code is followed by the robot, not the person who makes that robot.
  • The third one—this is the most interesting one—the self-conscious ability to do ethical reasoning by robots according to robots' own self-chosen moral code. Here the prime mover is not the person; the prime mover is a robot.

An old Chinese saying—I cited this before, but I believe this is very important so I'll cite it again. Chinese people are very good at coming up with new concepts and we are very good at following them. (Laugh)

There are three principles that define human actions. One of them is intelligence, of course; the other one is emotion; and the third one is volition. As Hiroi-san pointed out, the oldest brain takes care of volition-type things such as intuition; and then the next one takes care of the emotion. The newest brain takes care of intelligence. It is amazing that Chinese people somehow knew that the human action is controlled by those three.

We saw that intelligence and emotion are installed onto robots in the form of artificial intelligence and virtual emotion, in my case.

How about volition? Volition means free will. Ethics is a criterion for diagnosing good from bad. Ethics can be discussed only when there is a freedom of choice. A robot without volition does not make choices based on free will but based on programmed commands; therefore, a robot without volition is not subject to ethics.

Then, what is the ethics we are discussing here? It must be the ethics of the person who commands the robot, or who developed it, to make a choice. This is the first phase that I talked about. The responsibility is on the person who developed or is using the robot. Yes, that person is subject to human ethics, of course.

Then, what can we say about a robot with volition, a robot with free will? The robot can decide its behavior with free will. Ethics is a criterion—I'm repeating myself again—to decide good and bad or proper behaviors. Can we decide this? The key word is "we." Are robots with volition subject to human ethics? Are they?

My answer is no. A robot with volition is subject to robot ethics. The act of applying human ethics to a robot with volition is based on the concept that the robot is a slave of humans or something which we control, not an independent existence.

In fact, an existence with volition has its own ecology; therefore, once we bestow volition on robots, they become independent of human beings, because robots decide their behavior at free will.

An existence with volition has its own ecological system, as I said before. Ethics is a criterion for diagnosing good behaviors from bad ones of the existence in the same ecological system.

Once robots become members of an ecological system that is different from ours, human ethics becomes no longer applicable to them. This is what I claim.

What happens to a human system when robots have volition and their own ecosystem? This is a very interesting question, right? Terminator comes in here.

It is difficult to conceive because humans have never experienced the existence of another ecosystem as advanced as our own. We never had that. In fiction we do, but not in the real world.

Is it inevitable for two ecosystems to fight for existence? So, is Terminator inevitable?

It seems to me that this meeting here is to discuss whether two independent ecosystems can coexist in a common environment, namely, Earth. I believe it is possible if both we and the robots respect each other's ethics. If we want robots to respect human ethics, we need to respect robot ethics as well.

An example situation: A robot and human are in danger. A second robot must determine which to save. Saving the human of course is the correct solution according to the human ethics; but saving the robot may be the correct solution according to the robot ethics.

In conclusion, I would like to show this slide, which is a summary of what I presented to you.

Thank you.
