J. Storrs Hall (JoSH)'s Beyond AI provides a good and thorough introduction to the issues surrounding AI. I had expected JoSH to try to explain how to build an AI, and to fail at that, because no one really knows what the necessary breakthroughs will be. I expected that because I've known him long enough to know that he's a smart, ambitious guy who's thoroughly familiar with AI and who doesn't seem to have been drawn into the debates and discussions about "friendly AI". But he surprised me by writing a very readable, very useful book that doesn't say much that is new, but organizes what is known in a way that lays out the important issues in context and gives a road map for how we can learn to deal with the changes that the development of smarter-than-human AI will bring to our world.
After a brief dip into dreams from antiquity of creating artificial creatures and how they were expected to change the world, JoSH starts the technical history with feedback theory and cybernetics, and shows how those evolved into control theory, information theory, and neural networks. He then shows where his roots are with a section titled "The Golden Age" that covers work on symbolic AI through the 60s and 70s. That work produced what looked like rapid progress and solutions to a number of problems: competence in various microworlds, a rudimentary ability to generate understandable language, the ability to understand constrained language, and promising representations for abstract knowledge. This is followed by a chapter that shows how the pioneers became disillusioned with their approaches by the end of the 1970s: as they harnessed their tools to a wider variety of problems, they discovered that they weren't solving harder problems or finding approaches that led to general mastery.
Any particular problem area could apparently be analyzed and reduced to a mechanical solution, but that solution didn't seem to help with the next one. JoSH attributes the stumble to the fact that the early approaches relied on programmers explicitly coding specific knowledge about each domain into an architecture organized around a formal model of the domain. This works for a constrained area, but leaves no room for fuzzy boundary cases. People are good at interpreting definitions and instructions loosely and knowing when to do so, but a program that can diagram sentences and summarize a typical daily newspaper would be useless if you wanted to translate the user's manual for a consumer appliance, or generate route instructions for a navigation application.
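To make that brittleness concrete, here's a minimal sketch in the Golden Age style; it's my own illustration, not from the book, and the commands and rules are invented. Explicit, hand-coded knowledge works inside its microworld but fails the moment a phrasing falls outside the formal model.

```python
# A toy hand-coded "expert" in the Golden Age style: explicit rules
# built around a formal model of one narrow domain. The rules and
# phrasings are invented for illustration.

def blocks_world_action(command):
    """Dispatch a fixed set of known commands to actions."""
    rules = {
        "pick up the red block": ("grasp", "red"),
        "put the red block on the green block": ("stack", "red", "green"),
    }
    if command in rules:
        return rules[command]
    # No room for fuzzy boundary cases: any phrasing outside the
    # formal model simply fails, and nothing learned here helps
    # with the next domain.
    raise ValueError(f"cannot interpret: {command!r}")

print(blocks_world_action("pick up the red block"))   # ('grasp', 'red')
# blocks_world_action("grab that red one")            # -> ValueError
```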
The next part of the book addresses the nature of mind: what general intelligence is, and what it would take to build something that could understand itself well enough to enhance its own functioning. JoSH draws together evolution (via E. O. Wilson's Sociobiology) with advances in computing and philosophy. Researchers in the 90s were explaining how the mind is made of separate modules that we can analyze in isolation, and whose distinct failures can be seen in identifiable mental disorders. When artificial minds are contemplated from that point of view, the problem of building one to suit appears more tractable, and the idea of recursive self-improvement becomes more manageable. In our own brains, it's clear that there are distinct modules with separate responsibilities, that when one part is damaged, other brain regions can substitute for its functionality at reduced efficiency, and that the modules communicate with one another in some flexible way rather than relying on formal, precise interfaces. That makes it easier to believe that an artificial mind, built modularly, could use its understanding to upgrade and improve itself.
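One common software analogue of that loose coupling is a blackboard architecture, where modules share a common workspace instead of calling each other through formal point-to-point interfaces. Here's a minimal sketch; the module names and behavior are my own invention, not JoSH's design.

```python
# Minimal blackboard sketch: modules post partial results to a shared
# workspace and react to what others have posted, rather than relying
# on formal, precise interfaces between modules.

class Blackboard:
    def __init__(self):
        self.facts = {}

    def post(self, key, value):
        self.facts[key] = value

    def read(self, key):
        return self.facts.get(key)  # None if nothing posted yet


def vision(bb):
    bb.post("object_seen", "cup")

def language(bb):
    obj = bb.read("object_seen")
    if obj is not None:
        bb.post("utterance", f"I see a {obj}")

def backup_namer(bb):
    # A cruder substitute that steps in if the language module is
    # damaged or removed -- the same function at reduced efficiency.
    if bb.read("utterance") is None and bb.read("object_seen"):
        bb.post("utterance", "that thing there")


bb = Blackboard()
for module in (vision, language, backup_namer):
    module(bb)
print(bb.read("utterance"))  # "I see a cup"
```

Because the modules only see the blackboard, removing or breaking one doesn't break the others; a weaker module can fill the gap, which is the substitution-at-reduced-efficiency behavior described above.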
The issue that symbolic AI faced, that its representations were only useful in the verbal domain and not in the physical, has been addressed by the modern embodied approach. There has been a lot more focus recently on building robots that interact with the world to build up their model of their environment. This is a start on figuring out how learning works, and one of the discoveries is that agents interacting with the physical world can often get by with much less internal representation than earlier generations expected. The physical world provides a local representation of itself which, if you can interpret it, can answer many questions at the moment they arise. This simplifies learning and doesn't require as much one-on-one training as building a formal model. It also seems to be considerably more robust where the formal approaches were brittle. These embodied agents have also provided a more situated environment for re-exploring earlier lessons on reasoning and Bayesian inference. If the robot has to be able to adapt to a variety of different locations, then you're better off giving it the ability to become familiar with wherever it ends up than having it unable to do anything until someone explains where the doors are and which outlets it's supposed to use.
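A toy sketch of that idea, often summarized as "the world is its own best model": instead of maintaining a detailed internal map, the agent queries its sensors afresh at every step. The environment and sensor API below are invented for illustration.

```python
# Reactive, embodied control: sense locally and act, rather than plan
# over a stored world model. Everything here is a toy illustration.

class World:
    """The world represents itself; the agent can always just look."""
    def __init__(self, charger_position):
        self.charger = charger_position

    def sense_charger_direction(self, position):
        """Local sensing: which way is the charger from here?"""
        if position == self.charger:
            return 0
        return 1 if self.charger > position else -1


def reactive_robot(world, position=0, max_steps=100):
    """No map, no plan: consult the world at every step.
    This stays correct even if the charger is moved mid-run,
    which is where model-heavy formal approaches tend to be brittle."""
    for _ in range(max_steps):
        direction = world.sense_charger_direction(position)
        if direction == 0:
            return position  # docked
        position += direction
    return position


print("docked at", reactive_robot(World(charger_position=7)))
```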
JoSH spends a chapter outlining an approach in which the AI reasons by analogy with its own history. It reminds me of Jeff Hawkins's description of how the brain works in his book On Intelligence, though JoSH doesn't go into much detail. It's one part of what will be needed, but the situated AI work will provide many more pertinent clues.
The last third of Beyond AI focuses on social consequences. First: what, who, where, and when; then the questions of free will, what morality should apply and whether it will be friendly to us; and finally whether there will be a singularity, and if so, which one. JoSH identifies four approaches that might lead to different WHAT answers: direct synthesis of AI software, emulation of the human brain at the neural level, emulation at some higher level, or building a learning machine that grows up to be a full AI. As far as WHO will win the race, JoSH identifies the military, university and industrial labs, start-ups, and the open source community as contenders, without giving any of them an edge. When he addresses WHERE the breakthrough is likely to take place, he tips his hand as to the shape he expects it to take.
Given the international nature of both the scientific community and the Internet, however, [...] The answer is most likely, everywhere.
As to WHEN, his conservative projection is for everything except human-level flexibility and creativity by 2025, and general human equivalence by 2035. With a few key breakthroughs, he thinks general human equivalence could arrive in the 2020s.
JoSH does an unusually good job of explaining why free will isn't a problem. First, note that he laid the groundwork earlier by talking about how we understand gravity, so that he could forestall a crucial objection here without requiring a long aside. The whole book seems to have been constructed this way, with explanations early on that reduce confusion later without seeming out of place when they occur. As he presents it, the problem is that we have a strong intuition that there's some contradiction between the deterministic nature of the universe and our ability to make choices that change the way things turn out. JoSH points out that in order to predict how our behavior will affect things, we have to have a mental model that includes a deterministic world which we inhabit, but our model of ourselves has to be one that shows us making choices. We have to think of ourselves as considering alternatives, evaluating them, and then making a choice. (When we're sophisticated, our models show other people making choices as well.) Given that the mental model is what allows us to make decisions, it has to have those two parts. Even if everything is deterministic, the self-model has to consider different possibilities before choosing actions. That part can't feel deterministic if it is to succeed as a model. That's all free will is.
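The point can be made concrete in a few lines of code (my illustration, not JoSH's): the procedure below is fully deterministic, yet it cannot work without representing each alternative as a live possibility. The options and payoffs are invented.

```python
# A deterministic chooser that must still enumerate alternatives.

def choose(options, evaluate):
    """Same inputs always yield the same choice -- fully deterministic.
    But the procedure only works by imagining each option's outcome,
    so from the inside the alternatives have to be treated as open."""
    best_option, best_score = None, float("-inf")
    for option in options:
        score = evaluate(option)  # consider this possibility
        if score > best_score:
            best_option, best_score = option, score
    return best_option


payoffs = {"take umbrella": 3, "leave umbrella": 1}
print(choose(payoffs, payoffs.get))  # 'take umbrella', every time
```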
The conclusions reached in Beyond AI about the ultimate shape of the future are remarkably similar to my own. Change will be large but will arrive gradually, and there won't be any dominating breakthroughs. Many people will develop many different systems that advance the state of the art along a broad front. The groups best able to take advantage of other people's work will be those working in the open and sharing their results. In that kind of environment, the best way to exploit an advance is to bring it to the marketplace. AIs that emerge in this context will see that cooperating with others and competing to best serve customers and provide value is the best way to get ahead. This kind of morality will serve them, and will lead them to be friendly in the important sense. Just as Adam Smith explained with his Invisible Hand metaphor, they'll help us (their customers) because that's the best way to advance their own interests.