
November 2002 Archives

November 8, 2002

Almost Like Déjà Vu

Going to Notre Dame certainly is an experience. I went to one of the dining halls today (the South Dining Hall) and good god what an eye-opener that was. Where OU is brushed stainless steel, ND is aged wood. Where OU is generic, 70's, and indestructible, ND is specially-made, 1900's, and mostly indestructible. Where OU has food, ND has food to cater to your every conceivable whim. It's that different.

I was also noticing how people look. There seems to be a different sort of "generic" person here. "Generic" people here generally don't wear jeans (they wear some other dark color), and they wear grey sweatshirts, don't wash their hair, and have headphones. OU was... different. And there are the typical stereotypes here - emaciated women, jock guys, etc. But I've noticed the most peculiar thing, which is that some people look nearly identical to people I knew back at OU (well, maybe not knew, but recognized). It's... really very odd, and makes me begin to question the validity of my memories of OU.

November 18, 2002

Why on Earth Should Computers Care About Emotion?

It seems that the obvious instances of computers having emotion would not be useful - for example, HAL from 2001 being unable to converse and understand people's emotional responses in any more than a completely naive way, or a computer getting irritated with you when you input the same wrong thing several times. However, the chapter makes the case (several times and several ways) for computers being able to recognize and/or produce emotional responses.

The benefit of emotional recognition is clear. From recognizing anger in cars (and playing soothing music) to recognizing frustration at the terminal (and suggesting a break), the benefits are everywhere. There is a brief bit on affective transmission in communication protocols, and how emotional recognition could be used as a compression technique for pictures of faces and the like - but that's just plain silly; in any medium other than video, we prefer to have control over the quasi-emotion that is expressed, and such emotional compression hardly seems much of a benefit, being simply tacked onto existing modes of communication.

The rest of the chapter deals with what a computer might get out of emotions. This can be broken into two fields - what does a computer get from using emotions internally, and what does a computer get from using emotions externally.

Using emotions internally does not, of course, require that a computer express those emotions externally. They can still be useful for many things - in a way, the same things they are useful for in us. For example, emotions are sometimes used as shortcuts to processing, either for speed or for sheer bulk. A fear response might cause the computer to act to save itself without specific reasons and without careful thought (behaving differently to, say, avoid a viral infection). Emotions, good and bad, can also be used to prioritize large volumes of information quickly, so as to deal with and/or avoid information overload. The chapter also notes the relation between emotion and memory, which may be an integral part of intelligence (or, at the very least, affects it in fun ways).

Sticking with the internal focus, but not so much with the strict benefits: a useful but not strictly required ability is emotional self-awareness (it is required for good emotional intelligence, but not for simply having emotions). If a computer can be aware of its own emotions, it can reason about them, and can even use them as motivation to do or not do certain things. For example, it may dislike rebooting (this is obviously a contrived example) - but rebooting may be necessary for a high-benefit result like a system upgrade.
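To make the prioritization idea concrete, here is a toy sketch of my own (not from the chapter, and every name in it is made up): incoming items carry a crude "affective" tag, and that tag scales their importance so the agent triages a flood of information quickly instead of weighing each item carefully.

```python
import heapq

# Illustrative affect weights - a fearful item gets a big boost, the way
# a fear response short-circuits careful deliberation.
AFFECT_WEIGHT = {"fear": 3.0, "joy": 1.5, "neutral": 1.0}

def triage(items):
    """Order (importance, affect, payload) items by affect-weighted score, highest first."""
    heap = []
    for importance, affect, payload in items:
        score = importance * AFFECT_WEIGHT.get(affect, 1.0)
        heapq.heappush(heap, (-score, payload))  # negate: heapq is a min-heap
    return [heapq.heappop(heap)[1] for _ in range(len(heap))]

inbox = [
    (2.0, "neutral", "routine log entry"),
    (1.0, "fear", "possible viral infection"),  # low importance, but fear triples it
    (2.5, "joy", "upgrade finished"),
]
print(triage(inbox))
# -> ['upgrade finished', 'possible viral infection', 'routine log entry']
```

The point of the sketch is only that the affect tag acts as a cheap, pre-computed bias on attention, not a reasoned judgment.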

Using emotions externally is a different issue. In some ways, as indicated in the chapter, computers may be better at expressing emotion externally (through the use of pictures and caricatures) than humans are - even to the point of being able to express contagious emotion. The real question is what kind of spontaneous emotion can be displayed, since the most obvious emotional indicators a computer could employ seem far too intentional and easily (perhaps intentionally) misleading. One example I liked was a robot demonstration in which, at some point, the robot simply stopped. As it turned out, it stopped because its input buffers were full - which could be viewed as a peculiarly robotic affective state, and stopping was a spontaneous expression of it. Humans allow emotional influence to go the other direction as well - smiling can make humans feel happy. Computers can have sensors to attempt to simulate that direction of emotional influence, but there seems something far too contemplative about that.

The real heart of the emotional issue - or at least the real kernel that has yet to be even conceptually solved - is the "gut feeling" without identifiable cause (causes may be identified later, but it does not have or need them initially). This may be the difference between humans and robots, at least in the implementation sense, unless we can figure out a way to simulate it - and it seems a particularly sticky problem. Perhaps it could be random, but randomness wouldn't necessarily match well with post-hoc justification.

One last issue: in the chapter, some space is devoted to examining whether computers merely imitate human emotion (and emotion recognition) or can actually duplicate it. This strikes me as silly to a certain extent, because there is nothing to guarantee that even humans duplicate each other's emotional response mechanisms perfectly.

November 24, 2002

Dogmatic Questions

So I’ve been thinking about religion… trying to figure it out, I guess. I’ve got a couple questions that seem to need answering:

  • How do we have free will? All of our actions are connected biologically to nerve impulses, which are in turn connected to other nerve impulses, chemical gradient changes, and electrical potential changes, in a predictable manner. Where does free will fit in?
  • God created Adam and Eve in the Garden of Eden, and also created a Tree of Knowledge which he instructed them not to touch. Why? Does this indicate that knowledge is wrong or evil? Was the serpent in the tree Lucifer before or after the great fall (did the great fall happen before or after the separation of the earth from the heavens?)? Were Adam and Eve set up to fail?
  • Why and how does the devil exist? Why throw Lucifer out of heaven instead of uncreating him? How could an angel conceive such thoughts without God’s knowledge, given that God designed him?
  • Is God subject to time?

About November 2002

This page contains all entries posted to Kyle in November 2002. They are listed from oldest to newest.

October 2002 is the previous archive.

December 2002 is the next archive.

Many more can be found on the main index page or by looking through the archives.

This weblog is licensed under a Creative Commons License.
Powered by Movable Type 3.34