Monday, November 24, 2014

Confusion about AI

I like Brockman's Edge.org. I think of it as smart people talking to smart people, and I usually find the discussions very interesting. But I was unable to read the recent conversation on the Myth of AI, started by Jaron Lanier and mostly focused on Bostrom's Superintelligence. I expect Bostrom's work to be very important, though I haven't found time to read it yet. Superintelligence discusses the likely emergence of super-human intelligences, what there is to look forward to, and what we should worry about. I consider these very important issues, though I don't think they're going to make a huge difference in the next 10-20 years. Further out, though, it will indeed be crucial that we spend time planning how to make these intelligences not act in ways that are inimical to our interests. It's not that there's any reason to expect them to be out to get us; it's just that they'll have goals, and if we don't make the right moves ahead of time, we'll be in the way of their achieving those goals.
Anyway, starting with Lanier, the discussion seemed ill-informed. The opening quote has him saying "The idea that computers are people has a long and storied history." This conflates so many threads that it's hard to know where to start. It's like trying to have a discussion about free speech with someone whose opening point is a complaint about the Supreme Court having decided that "companies are people". As far as I can tell, the Court decided that corporations are one of the ways that people act in concert, and that they don't lose their free speech rights when they use that kind of organizational structure to speak publicly. The fact that this decision applies just as much to giant mega-corporations and unions as to the two-person public outreach institute that was the actual subject of the case is due more to the Court's belief in consistency than to anything else.
The point of AI isn't that "computers are people"; it's that thinking and acting can be reduced to computational processes (it all comes down to atoms and meat, after all), so there's no reason to believe that we won't eventually be able to build machines out of silicon that do the same thing, without being subject to the constraints that apply to biological mechanisms made out of carbon.
I was very happy to read Luke Muehlhauser's review (hat tip to Yvain). Luke agrees that the discussants at Edge are confused, and he had the patience to analyze some of the misconceptions and point back to the actual subjects of disagreement.