
Artificial Intelligence Archives

October 21, 2002

Brief Chapter 1 Thoughts

They talk about intonation. A good example is the scene in Three Men and a Baby where he's reading a wrestling magazine to the baby: the words don't matter at all, only the soothing tone does.

Also, it talks a lot about physical states representing emotional states, and gives (at one point) the example of someone whose feet sweat when they're nervous. However, it seems to me that whether someone's feet sweat or not should not be something we're trying to figure out. Really, it's irrelevant to my model of that person's internal state. The robots should be autonomous, and it seems fairly pointless to be able to determine someone's emotional state if the only way we can do it is by hooking the person up to complicated sensors. It seems more practical to be able to (or to try to) model emotional states by non-tactile means.

October 29, 2002

Part 1 Retrospective

I finished reading part 1.

This chapter brings up some very interesting thoughts, and some big challenges to how I thought emotion recognition "ought" to be done. I originally thought that computers should work the way we do, recognizing what little emotion we can simply by nontactile means. This, of course, is very limiting, and with the increasing ubiquity and portability of computers, not necessarily required (hey, if we can cheat, let's! Movies are only 24 frames a second, after all).

In particular, the chapter raises the problems of culture, gender, age, and knowledge of the observer (read: inhibition), which all sum to context, in affecting how people express emotions and how to interpret what is eventually expressed. The solution the chapter proposes is similar to how voice recognition has made its life easier: by focusing on a single person, training the system to recognize that one person's emotions.
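
To make that concrete for myself, here is a rough sketch of what person-specific training could look like, assuming we already have numeric features per observation (vocal pitch, speech rate, and so on) and self-reported labels from that one person; the feature names and the nearest-centroid approach are my own illustration, not anything from the book.

    import math
    from collections import defaultdict

    # Hypothetical labelled observations for ONE person: a feature vector
    # (say, vocal pitch, speech rate, brow furrow) plus a self-reported label.
    samples = [
        ([220.0, 4.1, 0.8], "frustrated"),
        ([240.0, 4.5, 0.9], "frustrated"),
        ([180.0, 3.0, 0.1], "calm"),
        ([175.0, 2.8, 0.2], "calm"),
    ]

    def train_centroids(samples):
        """Average the feature vectors per label, for this one user only."""
        grouped = defaultdict(list)
        for features, label in samples:
            grouped[label].append(features)
        return {
            label: [sum(col) / len(vectors) for col in zip(*vectors)]
            for label, vectors in grouped.items()
        }

    def classify(features, centroids):
        """Pick whichever label's centroid is closest to the new observation."""
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
        return min(centroids, key=lambda label: dist(features, centroids[label]))

    centroids = train_centroids(samples)
    print(classify([230.0, 4.3, 0.7], centroids))   # -> frustrated

The point is only that a model this simple has a chance of working for one user, where a universal, context-free emotion recognizer would not.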

The chapter then notes how cognition futzes with our emotions. For example, a person who's hot and irritable gets hit in the back of the legs really hard. Upon turning around, feeling all mad, he discovers it was a woman in a wheelchair who lost control, and doesn't feel mad anymore. The chapter introduces the idea of primary and secondary emotions, or as I like to think of them, tier one and tier two emotions. Tier one emotions are the knee-jerk ones: fear, quick anger, startle, etc. Tier two emotions are the ones that come only with a little bit of thought: grief, slow anger, sorrow, contentment, etc.

It occurred to me, thinking about the question of "why have emotions," that perhaps there are two different reasons, depending on the tier. Tier one seems obviously to be some sort of cognitive shortcut to aid survival (it happens without cognitive intervention, as a result of physical somatic responses). Tier two, however, seems more tied to learning than anything else (there was an example in the book of a man who could not experience tier two emotions and, as a result, could not learn from his mistakes). Of course, this learning introduces the deadening effect of overexposure to a given stimulus, and the possibility of emotional detachment, something which doesn't seem possible with the primary emotions. Think, for example, of the emotions you are able to suppress when giving a scholarly presentation, and those you would not be able to suppress.
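
One way to make that learning link concrete (entirely my own framing, not the book's) is to treat a tier one reaction as a hard-wired reflex and a tier two emotion as an appraisal that only appears after the outcome has been evaluated, with the appraisal acting as the signal that adjusts future behaviour:

    # Toy agent: tier one responses are hard-wired reflexes; tier two emotions
    # come from appraising outcomes, and the appraisal is what drives learning.
    REFLEXES = {"loud_bang": "startle", "looming_object": "flinch"}   # tier one

    class Agent:
        def __init__(self):
            # How much the agent currently "likes" each available action.
            self.action_value = {"risky_shortcut": 0.5, "safe_route": 0.5}

        def react(self, stimulus):
            # Tier one: fires immediately, no deliberation involved.
            return REFLEXES.get(stimulus)

        def appraise(self, action, outcome):
            # Tier two: only exists after a little thought about the outcome,
            # and it nudges how the agent values that action in the future.
            emotion = "regret" if outcome < 0 else "contentment"
            self.action_value[action] += 0.1 * outcome
            return emotion

    agent = Agent()
    print(agent.react("loud_bang"))                # startle, instantly
    print(agent.appraise("risky_shortcut", -1.0))  # regret...
    print(agent.action_value["risky_shortcut"])    # ...and the shortcut is now rated 0.4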

As far as emotion detection goes, I noted from one of their examples that the subtleties involved in emotional perception really make all the difference. Something as minute as a person's gait, or a quick twinkle in their eye, may completely invert our perception of that person's emotional state.

Then a fair bit of the chapter was devoted to things that seemed ancillary, like the observation that positive emotions help creativity, and the observation that willful emotional expression and pure emotional expression take different routes through the brain (e.g. brain-damage patients who can smile at a joke but not on command, and vice versa). One interesting conclusion they didn't specifically state, but which seems obvious, is that sympathy seems to stem from the way we remember things: we remember happy memories more easily when in a good mood, and vice versa; thus, when a friend is in trouble we can more easily think of times when we were in trouble.

November 18, 2002

Why on Earth Should Computers Care About Emotion

It seems that the obvious instances of computers having emotion would not be useful - for example, HAL from 2001 being unable to converse and understand people's emotional responses in any more than a completely naive way, or a computer getting irritated with you when you input the same wrong thing several times. However, the chapter makes the case (several times and several ways) for computers being able to recognize and/or produce emotional responses.

The benefit for emotional recognition is clear. From recognizing anger in cars (and playing soothing music), to recognizing frustration at the terminal (and suggesting a break), the benefits are everywhere. There is a brief bit on affective transmission in communication protocols, and how emotional recognition can be used as a compression technique for pictures of faces or whichever - but that's just plain silly; in any medium other than video, we prefer to have control over the quasi-emotion that is expressed, and such emotional compression hardly seems much of a benefit, being simply tacked on to existing modes of communication.

The rest of the chapter deals with what a computer might get out of emotions. This can be broken into two fields - what does a computer get from using emotions internally, and what does a computer get from using emotions externally.

Using emotions internally, of course, does not require that a computer express those emotions externally. They can still be useful for many things; in a way, the same things they are useful to us for. For example, emotions can serve as shortcuts to processing, for reasons of either speed or bulk. A fear response might cause the computer to behave differently to save itself, without specific reasons and without careful thought (behaving differently to, say, avoid a viral infection). Emotions, good and bad, can also be used to prioritize large amounts of information quickly, so as to deal with or avoid information overload. The relation between emotion and memory was also noted, and may be an integral part of intelligence (or, at the very least, affects it in fun ways).

Sticking with the internal focus, but not so much on the strict benefits, a useful but not strictly required ability is emotional self-awareness (it is required for good emotional intelligence, but not for simply having emotions). If a computer can be aware of its own emotions, it can reason about them, and can even use them as motivation to do or not do certain things. For example, it may dislike rebooting (obviously a contrived example), yet rebooting may be necessary for a high-benefit result like a system upgrade.
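
Purely as my own illustration of what "using emotions internally" might mean in practice, an affect tag on incoming items could drive both the quick prioritization and a protective fear mode; all the items, tags, and thresholds below are invented:

    import heapq

    # Hypothetical incoming items, each tagged with an affective appraisal:
    # urgency (0 to 1) and valence (-1 bad to +1 good). All values invented.
    incoming = [
        {"msg": "disk nearly full",        "urgency": 0.90, "valence": -0.8},
        {"msg": "routine status report",   "urgency": 0.20, "valence":  0.0},
        {"msg": "unknown binary executed", "urgency": 0.95, "valence": -1.0},
        {"msg": "user said thanks",        "urgency": 0.10, "valence":  0.9},
    ]

    def triage(items):
        """Use the affect tags as a cheap shortcut: handle the most urgent,
        most negative items first instead of reasoning carefully about each."""
        queue = [(-i["urgency"] + 0.1 * i["valence"], i["msg"]) for i in items]
        heapq.heapify(queue)
        return [heapq.heappop(queue)[1] for _ in range(len(queue))]

    def afraid(items):
        """A crude fear response: anything bad and urgent enough flips the
        system into a self-protective state without further thought."""
        return any(i["urgency"] > 0.9 and i["valence"] < -0.9 for i in items)

    print(triage(incoming))   # worst and most urgent first
    if afraid(incoming):
        print("entering restricted mode: suspend untrusted processes")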

Using emotions externally is a different issue. In some ways, as the chapter indicates, computers may be better at expressing emotion externally (through the use of pictures and caricatures) than humans are, even to the point of being able to express contagious emotion. The real question is what kind of spontaneous emotion can be displayed, since the most obvious emotional indicators a computer could employ seem far too intentional and easily (and perhaps intentionally) misleading. One example I liked was that of a robot demonstration where, at some point, the robot simply stopped. As it turned out, it stopped because its input buffers were full, which could be viewed as a peculiarly robotic affective state, and stopping was a spontaneous expression of that. Humans allow emotional influence to go in the other direction as well: smiling can make humans feel happy. Computers can have sensors to attempt to simulate that direction of emotional influence, but there seems something far too contemplative about that.
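
The buffer-full anecdote suggests the honest way to do this: map genuinely internal conditions to expressive behaviour, so the displayed "emotion" is a side effect of real state rather than something painted on. A toy sketch of that idea (the states, thresholds, and displays are all made up by me):

    # Map real internal conditions to expressive behaviour, so the "emotion"
    # shown is a side effect of actual state rather than a painted-on display.

    def internal_state(buffer_fill, battery, error_rate):
        if buffer_fill > 0.95:
            return "overwhelmed"   # the robot in the anecdote simply stopped
        if battery < 0.15:
            return "tired"
        if error_rate > 0.30:
            return "confused"
        return "fine"

    EXPRESSION = {
        "overwhelmed": "halt and stop accepting input",
        "tired": "slow movements, dim the status light",
        "confused": "pause and ask for clarification",
        "fine": "carry on normally",
    }

    state = internal_state(buffer_fill=0.97, battery=0.80, error_rate=0.05)
    print(state, "->", EXPRESSION[state])   # overwhelmed -> halt and stop accepting input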

The real heart of the emotional issue, or at least, the real kernel that has yet to be even conceptually solved, is that of the "gut feeling" without identifiable cause (later causes may be identified, but it does not have or need them initially). This may be the difference between humans and robots, at least in the implementation sense, unless we can figure out a way to simulate such - and it seems a particularly sticky problem. Perhaps it could be random, but that wouldn't necessarily match well with post-hoc justification.

One last issue: in the chapter, some space is devoted to examining whether computers merely imitate human emotion (recognition) or whether they can actually duplicate human emotion (recognition). This strikes me as silly to a certain extent, because there is nothing to guarantee that even humans duplicate each other's emotional response mechanisms perfectly.

January 9, 2003

Architecture-Based Conceptions of Mind

Most of Sloman's work seems devoted to making the point that everyone else in the AI and philosophy-of-mind fields is doing things wrong and treating the mind more simplistically than it really ought to be treated. Surprisingly enough, he likes his own model. Sloman's model combines the three-column approach (sensors, processors, affectors) with the three-layer approach (reactive, deliberative, meta-deliberative) into a 3x3 grid, and adds a pervasive alarm system (simplistic, reactive alarms) through which any layer can trigger events in any other layer.

Sloman describes how new the concept of "information processing", as opposed to "matter and energy processing", is, and how rife it is with ill-defined "cluster concepts". Gradually, he proposes ways of thinking about these cluster concepts (for example, not thinking of things as a continuum, but rather as clearly delineated by many small and less significant divisions). Part of his view of information processing systems, as opposed to matter and energy processing systems, is that "the semantic content of symbols used in such cases is only partly determined by internal mechanisms"; in other words, part of the semantic meaning of the symbols used by information processing systems resides in external things (namely, the environment).

Sloman makes an interesting claim about the causality between mental and physical entities, and draws an analogy to virtual machine systems. Essentially, his point is that the mental entity (the mind) affects the low level in the same way that a software program affects the bits and voltages of a computer (a program that may contain a virtual machine, and thus may emulate or simulate what might happen rather than actually making it happen). He states that "causal completeness at a physical level does not rule out other kinds of causation being real", essentially arguing that the mental entity is a logical abstraction with its own causes and effects, reflected in the lower level only because that is how it is implemented. And because the lower level is a full implementation, it has its own full causation; but that causation is simply the implementation of the larger logical ideas, structures, and processes (the architecture) embodied in the mind.

Sloman sounds a little grandiose at times, has a tendency to make rather inflammatory remarks (were they read by the right people), and seems entirely confident that the problems currently faced by AI researchers in their quest for human emulation will be surmounted. He claims, at one point, that robots will one day wonder whether humans are conscious. And they may, although it seems to me that his pet architecture lacks the structures needed to properly "wonder". He also takes a stab at killing the common "Zombie" argument, but makes no more compelling a case than any other attempt at the same, and even recognizes this, stating, "Of course, those who simply do not wish to believe that humans are information processing systems for theological or ethical reasons will not be convinced by any of this." His reasons fall a little short, simply labelling conflicting opinions as incomprehensible.
Finally, in his conclusion he brings up the example of chess-playing programs, or theorem-provers, and states that they don't enjoy or get bored by doing what they do because they lack the architectural structure for that, even though they may be programmed to give the appearance of such emotions. Oddly, he doesn't take the opportunity to put any more reason behind this than simply the semantic definition of want as an architectural function, and doesn't confront the simple question of whether even such an architecturally designed program is simply programmed to give the appearance of wanting or caring.
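
To keep the shape of Sloman's model straight in my head, here is a skeletal rendering of the 3x3 grid plus the pervasive alarm system. The class names and the toy behaviour are mine; this captures none of the content of the layers, only their arrangement:

    # Skeleton of Sloman's grid: three columns (sensors, processors, affectors)
    # crossed with three layers (reactive, deliberative, meta-deliberative),
    # plus a pervasive alarm system that any cell can use to interrupt the rest.

    COLUMNS = ["sensors", "processors", "affectors"]
    LAYERS = ["reactive", "deliberative", "meta-deliberative"]

    class AlarmSystem:
        """Simplistic, reactive alarms that cut across the whole grid."""
        def __init__(self):
            self.cells = []

        def raise_alarm(self, source, event):
            for cell in self.cells:
                if cell is not source:
                    print(f"  alarm -> interrupting {cell.layer}/{cell.column}: {event}")

    class Cell:
        def __init__(self, column, layer, alarms):
            self.column, self.layer, self.alarms = column, layer, alarms

        def process(self, signal):
            print(f"{self.layer}/{self.column} processing: {signal}")
            # Crude, fast pattern match: anything alarming bypasses deliberation.
            if "threat" in signal:
                self.alarms.raise_alarm(source=self, event=signal)

    alarms = AlarmSystem()
    grid = {(c, l): Cell(c, l, alarms) for c in COLUMNS for l in LAYERS}
    alarms.cells = list(grid.values())

    grid[("sensors", "reactive")].process("sudden threat in view")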
