Paper Summaries Archives

October 21, 2002

Brief Chapter 1 Thoughts

They talk about intonation - how tone can carry the emotional content of speech independently of the words. A good example is the scene in Three Men and a Baby where he's reading a wrestling magazine to the baby: the baby only cares about the soothing tone, not the content.

Also, it talks a lot about physical states representing emotional states, and gives (at one point) the example of someone whose feet sweat when they're nervous. However, it seems to me that whether someone's feet sweat or not should not be something we're trying to figure out - it's irrelevant to my model of that person's internal state. The robots should be autonomous, and it seems fairly pointless to be able to determine someone's emotional state if the only way we can do it is by hooking the person up to complicated sensors. It seems more practical to (try to) model emotional states by non-tactile means.

October 29, 2002

Part 1 Retrospective

I finished reading part 1.

This chapter brings up some very interesting thoughts, and some big challenges to how I thought emotion recognition "ought" to be done. I originally thought that computers should work the way we do - recognizing what little emotion we can recognize by non-tactile means alone. This, of course, is very limiting, and with the increasing ubiquity and portability of computers, not necessarily required (hey, if we can cheat, let's! Movies are only 24 frames a second, after all).

In particular, the chapter raises the problems of culture, gender, age, and knowledge of the observer (read: inhibition) — essentially, to sum all of these, context — in affecting how people express emotions and how to interpret what is eventually expressed. The solution the chapter proposes is similar to how voice recognition made its own life easier - by focusing on a single person, training the system to recognize that one person's emotions.
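This per-person idea is easy to make concrete. Here is a minimal sketch of what person-specific training might look like - a toy nearest-centroid classifier over made-up vocal features (the feature names, numbers, and method are my own illustration, not anything the book specifies):

    # Sketch: person-specific emotion recognition, in the spirit of
    # speaker-dependent voice recognition. All features and labels are
    # hypothetical stand-ins.
    from collections import defaultdict
    import math

    class PersonalEmotionModel:
        """Nearest-centroid classifier trained on ONE person's labeled samples."""

        def __init__(self):
            self.samples = defaultdict(list)  # emotion label -> feature vectors

        def train(self, emotion, features):
            # Record one labeled observation for this specific person.
            self.samples[emotion].append(features)

        def classify(self, features):
            # Pick the emotion whose average sample (centroid) is closest.
            def centroid(vectors):
                return [sum(dim) / len(dim) for dim in zip(*vectors)]
            return min(self.samples,
                       key=lambda e: math.dist(features, centroid(self.samples[e])))

    # Each user gets their own model, sidestepping cross-person variation.
    model = PersonalEmotionModel()
    model.train("angry", [220.0, 5.1, 0.9])  # hypothetical [pitch Hz, words/sec, volume]
    model.train("angry", [235.0, 5.4, 0.8])
    model.train("calm",  [140.0, 3.2, 0.4])
    print(model.classify([210.0, 4.8, 0.85]))  # -> "angry"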

Then the chapter notes how cognition futzes with our emotions. For example: a person who's hot and irritable gets hit in the back of the legs, really hard. Upon turning around, feeling all mad, he discovers it was a woman in a wheelchair who had lost control, and he doesn't feel mad anymore. The chapter introduces the idea of primary and secondary emotions, or as I like to think of them, tier one and tier two emotions. Tier one emotions are the knee-jerk "emotions" - fear, quick anger, startle, etc. Tier two emotions are the ones that come only with a little bit of thought - grief, slow anger, sorrow, contentment, etc.
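To make my two-tier framing concrete (this sketch is my own, not the book's), here is roughly how I picture it: a reactive layer that fires before cognition gets involved, and a deliberative layer that can reappraise - and cancel - the reaction once context arrives, as in the wheelchair example.

    # Sketch: tier one fires reflexively on the raw stimulus; tier two
    # reappraises once cognition supplies context, and may cancel it.
    def tier_one(stimulus):
        # Knee-jerk response: no cognition, just a fast somatic mapping.
        if stimulus["intensity"] > 0.7:
            return "quick anger"  # or fear, startle, ...
        return None

    def tier_two(reaction, context):
        # Slower reappraisal: context can sustain, transform, or suppress tier one.
        if reaction == "quick anger" and context.get("blameless"):
            return None  # the woman in the wheelchair: the anger evaporates
        return reaction

    stimulus = {"kind": "struck from behind", "intensity": 0.9}
    reaction = tier_one(stimulus)                    # -> "quick anger"
    final = tier_two(reaction, {"blameless": True})  # -> None (anger gone)
    print(reaction, "->", final)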

It occurred to me, thinking about the question of "why have emotions?", that perhaps there are two different reasons, depending on the tier. Tier one seems obviously to be some sort of cognitive shortcut (it happens without cognitive intervention, as a result of physical somatic responses) serving survival instincts. Tier two, however, seems more tied to learning than anything else (there was an example in the book of a man who could not experience tier two emotions and, as a result, could not learn from his mistakes). Of course, this learning introduces the deadening effect (responses dull if you are exposed to a given stimulus too much) and the possibility of emotional detachment - something which doesn't seem possible with the primary emotions. Think, for example, of the emotions you are able to suppress when giving a scholarly presentation, and which emotions you would not be able to suppress.

As far as emotion detection goes, I noted from one of their examples that the subtleties involved in emotional perception really make all the difference. Something as minute as a person's gait, or a quick twinkle in their eye, may completely invert our perception of that person's emotional state.

Then a fair bit of the chapter was devoted to things that seemed ancillary, like the observation that positive emotions help creativity, and the observation that willful emotional expression and pure emotional expression take different routes through the brain (e.g., brain-damage patients who can smile at a joke but not on command, and vice versa). One interesting conclusion they didn't specifically state, but which seems obvious, is that sympathy stems from the way we remember things - we recall happy memories more easily when in a good mood, and vice versa; thus, when a friend is in trouble, we can more easily think of times when we were in trouble.

January 9, 2003

Architecture-Based Conceptions of Mind

Most of Sloman's work seems to be trying to make his point that everyone else in the AI and philosophy-of-mind fields is doing things wrong, treating the mind more simplistically than it really ought to be treated. Surprisingly enough, he likes his own model. Sloman's model combines the three-column approach (sensors, processors, affectors) with the three-layer approach (reactive, deliberative, meta-deliberative) in a 3x3 grid, and adds a pervasive alarm system (simplistic, reactive alarms) through which any layer can trigger events in any other layer.

Sloman describes how new the concept of "information processing" (versus "matter and energy processing") is, and how rife it is with ill-defined "cluster concepts". Gradually, he proposes ways of thinking about these cluster concepts (for example, not thinking of things as a continuum, but rather as clearly delineated by many small and less significant divisions). Part of his thinking about information processing systems, as opposed to matter and energy processing systems, is that "the semantic content of symbols used in such cases is only partly determined by internal mechanisms" - in other words, part of the semantic meaning of the symbols used by information processing systems is contained in external things (namely, the environment).

Sloman makes an interesting claim about the causality between mental and physical entities, and draws an analogy to virtual machine systems. Essentially, his point is that the mental entity (mind) affects the low level in the same way that a software program affects the bits and voltages of a computer (a software program that may contain a virtual machine, and thus may emulate or simulate what might actually happen rather than making it happen). He states that "causal completeness at a physical level does not rule out other kinds of causation being real" - essentially arguing that the mental entity is simply a logical abstraction with its own causes and effects, reflected in the lower level only because that is how it is implemented. And because the lower level is a full implementation, it has its own full causation; but this is simply the implementation of the larger logical ideas, structures, and processes (the architecture) embodied in the mind.

Sloman sounds a little grandiose at times, has a tendency to make rather inflammatory remarks (were they read by the right people), and seems entirely confident that the current problems faced by AI researchers in their quest for human emulation will be surmounted. He claims, at one point, that robots will one day wonder whether humans are conscious. And they may, although it seems to me that his pet architecture lacks the structures to properly "wonder". He also takes a stab at killing the common "zombie" argument, but doesn't make any more compelling an argument than any other attempt at the same, and even recognizes this by stating, "Of course, those who simply do not wish to believe that humans are information processing systems for theological or ethical reasons will not be convinced by any of this." His reasons fall a little short, simply labeling conflicting opinions as incomprehensible.
Finally, in his conclusion he brings up the example of chess-playing programs, or theorem provers, and states that they don't enjoy or get bored by doing what they do, because they lack the architectural structure for that, even though they may be programmed to give the appearance of such emotions. Oddly, he doesn't take the opportunity to put any more reason behind this than the semantic definition of wanting as an architectural function, and doesn't confront the simple question of whether even such an architecturally designed program is itself simply programmed to give the appearance of wanting or caring.
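For my own reference, here is a minimal sketch of the shape of Sloman's architecture as I read it: the 3x3 grid of columns and layers, plus the pervasive alarm system through which any cell can trigger events in any other layer. The cell behavior is invented; only the wiring is meant to reflect the paper.

    # Sketch: three columns crossed with three layers, plus a pervasive
    # alarm system that bypasses the grid. Cell behavior is invented.
    COLUMNS = ["sensors", "processors", "affectors"]
    LAYERS = ["reactive", "deliberative", "meta-deliberative"]

    class AlarmSystem:
        """Simplistic and pervasive: broadcasts to every cell in every layer."""
        def __init__(self):
            self.cells = []

        def raise_alarm(self, source, event):
            for cell in self.cells:
                if cell is not source:
                    print(f"  alarm -> [{cell.layer}/{cell.column}]: {event}")

    class Cell:
        def __init__(self, column, layer, alarms):
            self.column, self.layer, self.alarms = column, layer, alarms

        def process(self, signal):
            print(f"[{self.layer}/{self.column}] processing {signal!r}")
            if signal == "threat":  # a simplistic, reactive trigger
                self.alarms.raise_alarm(source=self, event="interrupt")

    alarms = AlarmSystem()
    grid = {(c, l): Cell(c, l, alarms) for c in COLUMNS for l in LAYERS}
    alarms.cells = list(grid.values())
    grid[("sensors", "reactive")].process("threat")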

About Paper Summaries

This page contains an archive of all entries posted to Kyle in the Paper Summaries category. They are listed from oldest to newest.
