
Architecture-Based Conceptions of Mind

Much of Sloman's work seems devoted to making the point that everyone else in the AI and philosophy-of-mind fields is doing things wrong, treating the mind more simplistically than it deserves. Surprisingly enough, he likes his own model. Sloman's model combines the three-column approach (sensors, processors, effectors) with the three-layer approach (reactive, deliberative, meta-deliberative) in a 3x3 grid, and adds a pervasive alarm system (simplistic, reactive alarms) through which any layer can trigger events in any other layer (a toy code sketch of this grid appears below).

Sloman describes how new the concept of "information processing" (as opposed to "matter and energy processing") is, and how rife it is with ill-defined "cluster concepts". Gradually, he proposes ways of thinking about these cluster concepts (for example, not thinking of things as a continuum, but rather as clearly delineated by many small, less significant divisions). Part of his argument about information processing systems, as opposed to matter and energy processing systems, is that "the semantic content of symbols used in such cases is only partly determined by internal mechanisms" - in other words, part of the semantic meaning of the symbols used by an information processing system resides in external things (namely, the environment).

Sloman makes an interesting claim about the causality between mental and physical entities, drawing an analogy to virtual machine systems. Essentially, his point is that the mental entity (the mind) affects the low level in the same way that a software program affects the bits and voltages of a computer (a software program that may itself contain a virtual machine, and thus may emulate or simulate what might happen rather than making it happen). He states that "causal completeness at a physical level does not rule out other kinds of causation being real", essentially arguing that the mental entity is a logical abstraction with its own causes and effects, which are reflected in the lower level only because that is how it is implemented; and because the lower level is a full implementation, it has its own full causation, but that causation is simply the implementation of the larger logical ideas, structures, and processes (the architecture) embodied in the mind.

Sloman sounds a little grandiose at times, has a tendency to make rather inflammatory remarks (were they read by the right people), and seems entirely confident that the problems currently facing AI researchers in their quest for human emulation will be surmounted. He claims, at one point, that robots will one day wonder whether humans are conscious. And they may, although it seems to me that his pet architecture lacks the structures to properly "wonder". He also takes a stab at killing the common "zombie" argument, but doesn't make any more compelling a case than other attempts at the same, and even recognizes this, stating, "Of course, those who simply do not wish to believe that humans are information processing systems for theological or ethical reasons will not be convinced by any of this." His reasons fall a little short, simply labelling conflicting opinions as incomprehensible.
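To make the grid concrete, here is a minimal sketch of the idea in Python (my own illustration, not Sloman's code; all of the names are hypothetical): a 3x3 grid of cells formed by crossing the three columns with the three layers, plus a pervasive alarm channel that any cell can use to interrupt every layer at once.

    # Hypothetical sketch: three columns crossed with three layers,
    # plus a pervasive alarm channel that reaches every layer directly.
    COLUMNS = ("sensors", "processors", "effectors")
    LAYERS = ("reactive", "deliberative", "meta-deliberative")

    class Alarm:
        """Fast, simplistic broadcast channel cutting across the whole grid."""
        def __init__(self):
            self.handlers = {layer: [] for layer in LAYERS}

        def subscribe(self, layer, handler):
            self.handlers[layer].append(handler)

        def trigger(self, source, message):
            # An alarm raised by any cell reaches handlers in every layer.
            for handlers in self.handlers.values():
                for handler in handlers:
                    handler(source, message)

    class Cell:
        """One box in the grid, e.g. reactive sensing or deliberative processing."""
        def __init__(self, layer, column, alarm):
            self.layer, self.column, self.alarm = layer, column, alarm
            alarm.subscribe(layer, self.on_alarm)

        def process(self, signal):
            if "threat" in signal:          # toy condition for raising an alarm
                self.alarm.trigger(self, signal)
            return f"{self.layer}/{self.column} handled {signal!r}"

        def on_alarm(self, source, message):
            print(f"{self.layer}/{self.column}: interrupted by alarm from "
                  f"{source.layer}/{source.column}: {message}")

    if __name__ == "__main__":
        alarm = Alarm()
        grid = {(l, c): Cell(l, c, alarm) for l in LAYERS for c in COLUMNS}
        # A reactive percept that looks threatening interrupts the whole grid.
        grid[("reactive", "sensors")].process("threat: looming object")

The only point of the sketch is the shape: the alarm path bypasses the normal column-and-layer routing, which is what lets a simplistic reactive mechanism redirect deliberation and meta-deliberation.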
Finally, in his conclusion he brings up the example of chess-playing programs and theorem-provers, and states that they don't enjoy or get bored by doing what they do because they lack the architectural structure for that, even though they may be programmed to give the appearance of such emotions. Oddly, he doesn't take the opportunity to put any more reason behind this than the semantic definition of wanting as an architectural function, and doesn't confront the simple question of whether even a program built on the right architecture is merely programmed to give the appearance of wanting or caring.
