
Why on Earth Should Computers Care About Emotion?

It seems that the obvious instances of computers having emotion would not be useful - for example, HAL from 2001, who could not converse with people or understand their emotional responses in any more than a completely naive way, or a computer getting irritated with you when you input the same wrong thing several times. The chapter, however, makes the case (several times and in several ways) for computers being able to recognize and/or produce emotional responses.

The benefit of emotional recognition is clear. From recognizing anger in cars (and playing soothing music) to recognizing frustration at the terminal (and suggesting a break), the benefits are everywhere. There is a brief bit on affective transmission in communication protocols, and how emotional recognition could serve as a compression technique for pictures of faces and the like - but that's just plain silly; in any medium other than video, we prefer to have control over the quasi-emotion we express, and such emotional compression hardly seems much of a benefit, being simply tacked onto existing modes of communication.
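To make the terminal example concrete, here is a toy sketch of my own - not anything the chapter proposes. The thresholds and the equation of "same failed input repeated quickly" with frustration are pure assumptions, but the point is that frustration detection needn't require anything fancier than watching a short window of failures:

```python
import time

# Hypothetical frustration detector. The threshold values and the
# assumption that repeated identical failures signal frustration are
# my own invention, for illustration only.
class FrustrationMonitor:
    def __init__(self, repeat_threshold=3, window_seconds=60):
        self.repeat_threshold = repeat_threshold
        self.window_seconds = window_seconds
        self.failures = []  # (timestamp, input) pairs

    def record_failure(self, user_input):
        now = time.time()
        self.failures.append((now, user_input))
        # Keep only failures inside the recent time window.
        self.failures = [(t, s) for (t, s) in self.failures
                         if now - t <= self.window_seconds]
        repeats = sum(1 for (_, s) in self.failures if s == user_input)
        if repeats >= self.repeat_threshold:
            return "You've tried that a few times - maybe take a short break?"
        return None
```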

The rest of the chapter deals with what a computer might get out of emotions. This breaks into two questions - what does a computer get from using emotions internally, and what does it get from using emotions externally?

Using emotions internally does not, of course, require that a computer express those emotions externally - they can still be useful for many things, and in a way, for the same things they are useful to us for. Emotions are sometimes shortcuts to processing, for reasons of either speed or bulk. A fear response may cause the computer to act to save itself without specific reasons and without careful thought (behaving differently to, say, avoid a viral infection). Emotions, good and bad, can also be used to prioritize large bulks of information quickly, so as to deal with or avoid information overload. The relation between emotion and memory was also noted, and may be an integral part of intelligence (or, at the very least, affects it in fun ways). Sticking with the internal focus, but not so much on the strict benefits: a useful but not strictly required ability is emotional self-awareness (required for good emotional intelligence, but not for simply having emotions). If a computer can be aware of its own emotions, it can reason about them, and can even use them as motivation to do or not do certain things. For example, it may dislike rebooting (an obviously contrived example) - but rebooting may be necessary for a high-benefit result like a system upgrade.
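A toy sketch of the prioritization idea - again my own construction, with an invented valence function and keyword weights; the chapter proposes no particular scheme. An agent drowning in messages attaches a crude "emotional" weight to each one and lets that weight drive triage, instead of reasoning carefully about every item:

```python
import heapq

# Invented keyword weights standing in for a fast affective appraisal.
ALARM_WORDS = {"error": 3, "failure": 3, "deadline": 2, "reminder": 1}

def valence(message: str) -> int:
    """Crude emotional weight: a cheap shortcut, not careful analysis."""
    return sum(w for word, w in ALARM_WORDS.items() if word in message.lower())

def triage(messages):
    """Yield messages most-alarming first, cheaply, to stave off overload."""
    heap = [(-valence(m), i, m) for i, m in enumerate(messages)]
    heapq.heapify(heap)
    while heap:
        _, _, m = heapq.heappop(heap)
        yield m

for m in triage(["Deadline moved up", "Lunch?", "Disk failure on server"]):
    print(m)
```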

Using emotions externally is a different issue. In some ways, as indicated in the chapter, computers may be better at expressing emotion externally (through pictures and caricatures) than humans are - even to the point of being able to express contagious emotion. The real question is what kind of spontaneous emotion can be displayed, since the most obvious emotional indicators a computer could employ seem far too intentional, and easily (perhaps deliberately) misleading. One example I liked was a robot demonstration during which the robot simply stopped. As it turned out, it stopped because its input buffers were full - which could be viewed as a peculiarly robotic affective state, with stopping as a spontaneous expression of it. Humans allow emotional influence to run the other direction as well - smiling can make us feel happy. Computers could have sensors to attempt to simulate that direction of emotional influence, but there seems something far too contemplative about that.
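For what it's worth, here is what that robot's display might look like if done deliberately - a mapping from an internal resource state to an expressed affect. Purely my own construction, inspired by the buffer-full anecdote; the states and thresholds are made up:

```python
# Hypothetical mapping from internal load to an expressed affect.
def expressed_affect(buffer_used: int, buffer_capacity: int) -> str:
    load = buffer_used / buffer_capacity
    if load >= 1.0:
        return "frozen"        # the robot in the anecdote: it just stops
    if load > 0.8:
        return "overwhelmed"   # signal distress before actually halting
    if load > 0.5:
        return "busy"
    return "attentive"

print(expressed_affect(95, 100))  # -> "overwhelmed"
```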

The real heart of the emotional issue - or at least the real kernel that has yet to be even conceptually solved - is the "gut feeling" without identifiable cause (causes may be identified later, but the feeling does not have or need them initially). This may be the difference between humans and robots, at least in the implementation sense, unless we can figure out a way to simulate such a thing - and it seems a particularly sticky problem. Perhaps it could be random, but randomness wouldn't necessarily square well with post-hoc justification.
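To show why randomness sits badly with post-hoc justification, here is a strawman of my own devising (nothing like this appears in the chapter): a random valence emitted with no cause, followed by a search for a justifying feature - which can simply come up empty:

```python
import random

# Toy "gut feeling": a random valence with no identifiable cause, by design.
def gut_feeling() -> float:
    return random.uniform(-1.0, 1.0)

# Post-hoc justification: look for an observed feature whose sign matches
# the feeling. With random feelings, the search often fails - the problem.
def justify(feeling: float, observed_features: dict) -> str:
    candidates = [name for name, val in observed_features.items()
                  if (val > 0) == (feeling > 0)]
    return candidates[0] if candidates else "no justification found"

f = gut_feeling()
print(f, justify(f, {"familiar_face": 1}))  # negative feelings go unexplained
```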

One last issue: the chapter devotes some space to the difference between computers merely imitating human emotion (and emotion recognition) and actually duplicating it. This strikes me as silly to a certain extent, because there is nothing to guarantee that even humans duplicate each other's emotional response mechanisms perfectly.
