Saturday, October 01, 2005

Eric Baum: What is Thought?

There was some substance to Eric Baum's What is Thought?, but it didn't add much to my understanding of the title question. Baum has been working in AI, writing software systems, some of them reasonably successful. He argues that intelligence can be recognized in any system that acts effectively based on a compact representation of the world. The evolution of more successful creatures was a continuing process of discovery of better representations of the nature of the world. He takes it as a given that the mind is a program. (Which is part of why he didn't add much to my understanding of what thought consists of.)

His main argument seems to be that thought and the mind are the result of evolution producing better and better representations. This is neither surprising nor new. He makes little attempt to explain how the mind works beyond saying that it uses analogies, which he apparently thinks is fundamental and important. His focus is on economy of representation, and he believes that reusing representations is the source of the mind's power. I'm not sure why this argument doesn't seem more compelling to me, since I often say that the source of the power of OOP is in the re-use that polymorphism gives you. I think the difference is that he doesn't show any mechanism by which the mind could be re-using modules. He shows how modular tools can be re-used in other architectures, but he doesn't say what the architecture of the mind is.

The valuable contribution of the book is its description of the Hayek system that Baum developed with colleagues at NEC Research. This was a general learning system based on evolution and agoric feedback. The design of this system made possible the evolution of separate agents that could work together to reach a goal. I haven't heard of other systems that were successful with this combination of goals, though there have been other attempts. The aspects of the approach that seem crucial to its success were that only one agent at a time was allowed to make changes, and that the active agent was chosen at each point by an auction among the agents. The agent that wins an auction pays its winning bid to the previous agent, and afterward gets paid the price bid by the next winner. Agents are charged rent when they're not running, and for CPU time when they are. Randomly mutated copies of the more successful agents are added to the population over time.

It's not obvious how you get this economy started, but once it's going, each agent competes to raise the value of the current state of the world so its successors will bid more than the previous state was worth. As long as there's an agent who can produce a final state of the world that is valuable according to some external metric, everyone can earn a living wage along the way. The problems he applied this ecology to included a simple blocks world problem, Rubik's Cube, and the traveling salesman problem.
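The auction-and-payment loop can be sketched in a few lines. This is my own toy reconstruction, not Baum's code: the bidding rule, the task (nudging an integer toward a target), and all the names are illustrative assumptions, and I've left out the mutation step that grows the population over time.

```python
class Agent:
    """One agent in a toy Hayek-style economy.

    The bidding rule here (bid your estimate of the value of the state
    you would produce, limited by your wealth) is my illustration;
    Baum's agents evolved their own bids.
    """
    def __init__(self, name, action, wealth=20.0):
        self.name = name
        self.action = action          # function: state -> new state
        self.wealth = wealth

    def bid(self, state, value_fn):
        return min(self.wealth, value_fn(self.action(state)))


def run_economy(agents, state, steps, value_fn, rent=0.1):
    prev_winner = None
    for _ in range(steps):
        # Auction: the single highest bidder acts.  (Choosing the top
        # bidder outright, rather than by lottery, is what keeps a
        # cooperating chain of agents intact.)
        bids = [(a, a.bid(state, value_fn)) for a in agents if a.wealth > 0]
        winner, price = max(bids, key=lambda ab: ab[1])
        winner.wealth -= price
        if prev_winner is not None:
            prev_winner.wealth += price   # payment flows back along the chain
        state = winner.action(state)
        for a in agents:                  # everyone pays rent each step
            a.wealth -= rent
        prev_winner = winner
    if prev_winner is not None:
        prev_winner.wealth += value_fn(state)  # external reward for final state
    return state


# Toy task: move an integer from 0 to the target 5.
TARGET = 5
value = lambda s: 10 - abs(TARGET - s)    # higher when closer to the target
inc = Agent("inc", lambda s: s + 1)
dec = Agent("dec", lambda s: s - 1)
final = run_economy([inc, dec], state=0, steps=5, value_fn=value)
```

With this setup the useful agent wins every auction, the state reaches the target, and the external reward leaves it wealthier than the useless one, so its (mutated) copies would come to dominate the population.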

According to someone familiar with the literature, the only other systems that tried to use markets in a similar way made the mistake of choosing the next active agent by a lottery proportional to the agents' bids rather than simply choosing the highest bidder. This weakens the link between agents that could cooperatively solve the problem, allowing less effective agents to intervene and destroy any order that has been built up. The clean approach in Hayek seems much more likely to work for long chains of agents, and to give evolution a better opportunity to discover shortcuts and more efficient approaches.

If you have a system that works, but not well, modified agents can attempt to shortcut the working path. If they reliably do better, the bypassed agents can wither away. If the new agent is unreliable, both paths can stick around long enough to compete, and possibly combine with an intermediate version that has a better model of when the different plans are applicable.

His discussion of the evolution of language in chapter 13 was particularly unsatisfying. He pointed out that language can't start growing beyond the level of innate language (which seems to exist in many creatures, from honeybees to vervets) until you get both the ability to learn to produce new words and the ability to learn new words. He ends up not explaining how evolution surmounted this hurdle; instead he treats it as the reason it was so rare for evolution to produce the ability.
