Saturday, December 01, 2007

Andy Clark: Being There

Andy Clark's Being There is a book on intelligence and AI. Clark's main point is that all known intelligence is situated; it relies on context and the local environment to cue behavior, rather than redundantly modeling the environment at significant expense.

The book was recommended to me when I mentioned Greg Egan's Diaspora again as a description of how I imagine an AI might come to self-awareness. I had hoped that the book would provide information on building AIs using situated approaches, along with some examples of successes, but the focus is on demonstrating that human and animal behavior and cognition rely heavily on context and environment. Clark gives plenty of examples and describes experiments on infant locomotion to show that even crawling and beginning to walk are responses to environmental cues.

As an argument that both learning and action in humans and many other creatures often appear only in response to environmental cues, the book is reasonably thorough. As an argument that this is the only way it could be, it falls short.

Most of the discussion of how this applies to AI is in examples of past projects in which someone discovered that eschewing representation, and striving instead to build something that performs reactively, simplifies the problem immensely. This is fine and useful for projects that aim to solve particular problems (build a robot that can dance or mow the lawn, or find Coke cans in a cluttered lab).
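The reactive style these projects use can be sketched very simply. Here is a minimal, hypothetical illustration (the function name and sensor keys are my own invention, not from the book): behaviors map current sensor readings directly to actions, with higher-priority behaviors subsuming lower ones, and no internal model of the world is built or consulted.

```python
def reactive_step(sensors):
    """Choose an action from raw sensor readings alone.

    No world model: the environment serves as its own representation,
    and each rule fires directly off what is currently sensed.
    Higher-priority behaviors come first and subsume the rest.
    """
    # Priority 1: safety -- avoid obstacles before doing anything else.
    if sensors.get("obstacle_ahead"):
        return "turn"
    # Priority 2: pursue a target the sensors currently report.
    if sensors.get("can_in_view"):
        return "approach"
    # Default: wander, letting the environment cue the next behavior.
    return "wander"
```

For example, `reactive_step({"obstacle_ahead": True, "can_in_view": True})` returns `"turn"`: the safety behavior subsumes the pursuit behavior, and nothing about the can is remembered once it leaves view. That forgetting is exactly the simplification, and exactly the limitation I discuss below.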

The argument that self-awareness may also require interaction with the environment seems different to me. Clark's examples all have the form that the environment is its own best representation, and that using direct observation to cue performance is more efficient. This is true as far as it goes, but we already know that reasoning creatures have internal models that reflect object persistence, agency, permeability, and many other features of the objects around us that we can't perceive just by looking around us.

As in Egan's vignette, thinking creatures seem to come to self-awareness by playing with the world through our innate sensors and effectors, and discovering that some parts of the world are out of our control and some are under our control. Still later we figure out that some of the parts we control are ourselves, while other parts are merely ours temporarily. Later still we realize that some of the parts we don't control are other agents with their own intentions, whom we can affect only indirectly. In Egan's version, the final step in coming to awareness is realizing that we and the other agents are the same kind of thing.

So while being situated matters in coming to awareness, what seems crucial is the interaction between internal models and an external world that we can affect but can't control unilaterally. Clark's focus is only on the ways in which relying on the environment simplifies the problems of an agent interacting with the world. I think this will help people produce useful tools, but without internal models, I expect those tools to continue to fall short of awareness or of reasoning beyond the immediate situation.
