Saturday, December 22, 2007

Adam Roberts: Gradisil

In Adam Roberts's Gradisil the pioneers who settle near space around the Earth fight for their political freedom, yet somehow the viewpoint characters are unsympathetic enough that it's hard to root for them. The story follows several generations of the family that leads the fight; their cause is worthy, but they're so dysfunctional and spend so much of their time pushing at cross purposes to the higher goal that I was willing to give up on them several times.

The SF is solid; the idea is that the magnetic fields around the planet are real and coherent enough that engines can be built cheaply to exploit them and climb into space. This puts near Earth orbits within the grasp of individuals. It's a new frontier, away from the control of any particular government, so the people who move there are loners, escapists, and fringe cases of many kinds. There aren't many commercial opportunities to exploit at first, and it's hard to track the ships and stations, so the society that emerges is extremely loose-knit for a long time. There is some sense of community and neighborliness, but people who want to be left alone are left alone.

Eventually the governments figure out that having so many misfits flying overhead in uncontrolled space is a concern, and they decide to pacify and take over. The uplanders resist in a passive way that exploits their strengths, or at least relies on their smallest weaknesses. I loved the depiction of a war that is controlled by the lawyers: all the generals understand that winning on the battlefield but losing in the courtroom is not winning at all, so strategy and tactics have to be approved by the lawyers before any warlike actions can be taken.

The actual battle scenes, when they finally occur, are plausible. A new environment and new technology lead to new tactics. The invaders aren't as familiar with the technology or the environment, so their expectations can be exploited by the native defenders.

The bottom line is that the science fiction is plausible and provides setting rather than being the focal point, and the good guys are fighting for something important, but the viewpoint characters, even as they mature and are replaced by their progeny, are hard to sympathize with. The action is interesting, but it's a long struggle to get through the whole thing.

As a nominee for the Prometheus award, it lacks any explicit or implicit invocation of themes of freedom. This is a glaring weakness, since the opportunities were rampant. The SF was reasonably well developed, and that has to count in the book's favor, but I didn't enjoy the character development. I don't see Gradisil as a strong contender.

Wednesday, December 19, 2007

Ken MacLeod: The Execution Channel

Ken MacLeod's The Execution Channel is a near-future story from a recently diverged history. The science fiction is minimal, but sufficient to keep the book off the mainstream shelves. It's an adrenaline-pumping adventure/chase story that takes a distinct political stance against the recent growth of the security state.

The story follows government operatives and members of the counterculture in the aftermath of a series of apparent terror incidents in Britain. The first incident might be a nuclear explosion at a military base, but one of the main characters (participating in a peacenik stakeout at the base) saw and photographed delivery of a strange device before the explosion. She has a brother in the military who blogs about daily life, and her father works in intelligence. We follow them as various international government agencies track and connect them, and then decide what actions to take to contain them.

This novel has an unusual feel for MacLeod, since it is mostly conventional fiction in a near present setting. The divergent history is only present to argue that national politics doesn't make as big a difference to how events unfold as people think. I thought he pulled this part off well, making a surprising argument with only tiny effects on the story. The other SF aspect of the story is the effect behind the explosions; since a handful of the characters spend their waking hours spreading disinformation, the hints about the resolution are easy to discount on first reading.

Mostly MacLeod has found a good readable adventure story to wrap around a commentary on ubiquitous surveillance, government malfeasance, torture, and current events. The story is quite readable, and the themes will resonate for libertarians and other anti-authoritarians and privacy zealots. I think it's a good candidate to win the Prometheus award this year.

Peter McCluskey had a different reaction to the book.

Wednesday, December 05, 2007

Keith Stanovich: The Robot's Rebellion

Keith Stanovich, in his book The Robot's Rebellion, takes the stance that we are vehicles driven by our genes and memes, and tries to give us the tools, and a place to stand, to figure out what matters to us. (The metaphor is that we are robots driven by these influences, and we should want to regain control for ourselves.) Since the only tools we can reason with, and all of our values, are held by and under the control of our genes and memes, this is a daunting task.

Without explicitly recognizing that he's discussing epistemology, Stanovich does a commendable job of summarizing the current research on standard biases in human reasoning. Once you understand the predilections of the tools you rely on, you can try to compensate for them and start to figure out what you want. Stanovich's proposal is fundamentally consonant with Pancritical Rationalism (which is the source of this blog's name). The metaphor he uses is that of repairing a ship plank by plank while at sea. Regardless of how much or how little confidence you have in the current framework, you have to stand somewhere in order to start the process of examining what's there and replacing the parts you don't have confidence in.

Much of the book repeats stories and results that have been widely reported in such popular books as Stumbling on Happiness, Adapting Minds, The Mating Mind, and The Blank Slate, but this material is easy to skim. Stanovich spends a lot of ink explaining that some of our analysis is done by mechanisms that are built-in and harder to introspect on or to change. This is relevant later when he talks about reconciling different desires.

One example of meta-rationality that Stanovich presents well is the point that introspection on your values may lead you to find apparent conflicts: you enjoy doing something but wish you enjoyed it less, or you don't enjoy it and wish you did. He provides a notation for talking about this kind of situation, which I found somewhat clumsy, but the idea of thinking about such things and having a language for analyzing them is valuable. He explains why you might have these conflicts, and why it is valuable to reason about the conflict from a viewpoint that is meta to both. Once you decide which desire is more important, he also shows that it's possible to use that understanding to bring your values into alignment, even when it's the more basic, inbuilt drive that you want to change. (I blogged last year about goals and meta-goals as ends and means.)

Stanovich only spends about 20 pages on identifying and defusing opinions and desires that serve to protect your memes from your introspection, but these sections are his most valuable contribution. The memes that set up a self-reinforcing structure forbidding evaluation of the meme-complexes themselves are the ones that most deserve concentrated attention. I think he explains this point well enough that people in the grip of religious (or other defensive) ideas would be able to see how the prohibition on introspection serves only the meme-cluster, which might help them get over the hurdle, start down a reflective epistemological path, and figure out what their own goals are.

Unfortunately, Stanovich ends the book by trying to show that markets subvert the goal of reconciling our desires and meta-desires. His argument is that markets only pay attention to money, and so the people with the most money get what they want and everyone else gets nothing. What this misses is that of all the actually possible social institutions, markets are unique in not giving a few people complete control of the economy. In a market, some people have more money and therefore get to command more resources, but anyone who has some money can still use it to buy some of what they want. The great failing of socialism is that only the politicians get a voice. But this is a minor failing of the book. On the whole, it's nice to see a book that learns from Evolutionary Psychology, and uses those ideas to help people learn how to think about what they want.

Saturday, December 01, 2007

Andy Clark: Being There

Andy Clark's Being There is a book on intelligence and AI. Clark's main point is that all known intelligence is situated; it relies on context and the local environment to cue behavior, rather than redundantly modeling the environment at significant expense.

The book was recommended to me when I mentioned Greg Egan's Diaspora again as a description of how I imagine an AI might come to self-awareness. I had hoped the book would provide information on building AIs using situated approaches, along with some examples of successes, but the focus is on demonstrating that human and animal behavior and cognition rely heavily on context and environment. Clark gives plenty of examples and describes experiments on infant locomotion to show that even crawling and beginning to walk are responses to environmental cues.

As an argument that learning and action in humans and many other creatures often appear only in response to environmental cues, the book is reasonably thorough. As an argument that this is the only way it could be, it falls short.

Most of the discussion of how this applies to AI is in examples of past projects in which someone discovers that eschewing representation and striving to build something that performs reactively simplifies the problem immensely. This is fine and useful for projects that want to solve particular problems (build a robot that can dance or mow the lawn, or find coke cans in a cluttered lab).

The argument that self-awareness may also require interaction with the environment seems different to me. Clark's examples all have the form that the environment is its own best representation, and using direct observation to cue performance is more efficient. This is true as far as it goes, but we already know that reasoning creatures have internal models that reflect object persistence, agency, permeability, and many other features of the objects around us that we can't get just by looking around.

As in Egan's vignette, we thinking creatures seem to come to self-awareness by playing with the world through our innate sensors and effectors, and discovering that there are parts of the world that are out of our control and parts that are under our control. Still later we figure out that some of the parts we control are ourselves and other parts are merely ours temporarily. Later we realize that some of the parts we don't control are other agents with their own intentions, and we can affect them only indirectly. In Egan's version, the final step in coming to awareness is realizing that we and the other agents are the same kind of thing.

So while being situated is crucial in coming to awareness, the interaction between internal models and an external world that we can affect but can't control unilaterally seems just as important. Clark's focus is only on the ways in which relying on the environment simplifies the problems of an agent interacting with the world. I think this will help people produce useful tools, but without internal models, I expect them to continue to fall short of awareness, or of reasoning beyond the immediate situation.

Friday, November 23, 2007

Progress on Augmenting Intelligence

I just read a new paper by Jeff Shrager (currently at CommerceNet, which funded my work on Zocalo in 2005 and 2006; thanks to Tom Malone of MIT's Sloan School for the pointer to the paper) on a system designed to support group reasoning processes based on Bayesian principles. Having just read Eliezer Yudkowsky's post on Artificial Addition, I was sensitive to the notion that some older attempts at AI had failed to enforce any semantic relationship between the nodes that are intended to represent particular concepts.

The system described by Shrager et al. uses a language understanding system (presumably based on the same tools PowerSet relies on; I've been admitted to PowerSet's trials, but can't access the activation code, so I haven't tried their demo yet). This means that, for the first time to my awareness, the system can help by finding semantic connections between assertions. This isn't trying to be AI, but computer support for collaborative reasoning is just as much on the path to enhancing our power to change the world as AI is.

Friday, October 12, 2007

Jim Lesczynski: The Walton Street Tycoons

Jim Lesczynski's The Walton Street Tycoons was suggested for the Prometheus award, but I don't think it will be a serious candidate. That's not because it's not libertarian enough, though; it's because it's completely not science fiction. Even 47 had more sf than this. The story tells of the spontaneous creation of a thriving market among a bunch of seventh graders in a small town that has fallen on tough times. All the kids get into it, and their problems with competition and government interference are quite entertaining. But the main character doesn't think like a 12-year-old; he thinks like an experienced, mature 27-year-old. His sex drive, understanding of women, ability to organize, and expectations of his parents and teachers are much better suited to someone who has already been in the working world for 10 years than to someone who is discovering it all for the first time.

The story and the characters are unrealistic enough that I wouldn't recommend it to non-libertarians, and the aspersions cast at schools, liberals, and government officials would make it hard for any of them to get through, especially since the aspersions come from the mouths of 12- and 13-year-olds in contrived and implausible situations. For libertarians, the situations are merely mild exaggerations of the kind of shenanigans we expect from government agents, but to less sympathetic eyes they'll look contrived and out of the ordinary. This is not the best way to convince someone that government doesn't work the way they think.

For libertarians, this is a mild diversion. As outreach, it's polemic and unbelievable.

Sunday, September 30, 2007

Inattentive Legislators

The October issue of Reason Magazine closes with a short piece (apparently not yet on-line) by Radley Balko on Dan Frazier, a Flagstaff activist who sells t-shirts superimposing the names of soldiers killed in Iraq over the phrases "Bush Lied" and "They Died". The article focuses on state legislatures in Arizona, Oklahoma and Louisiana that have passed bills (and attempts in Congress to do the same) prohibiting anyone from selling goods displaying the names of dead soldiers. Several of the legislators who voted in favor later acknowledged that the bills were probably unconstitutional.

Their excuses ranged from "a senior moment" to "failed to read the final version" to "hadn't read any version of the bill, despite voting for it twice." I'm outraged that they think these qualify as excuses. It seems to me that anyone who ever excuses a vote in a legislative body in this way should immediately be removed. If someone has "a senior moment" while driving, their license should be taken away. Isn't a congressional or a legislative seat a more dangerous weapon in the hands of an inattentive legislator?

Tuesday, September 25, 2007

Greg Mortenson: Three Cups of Tea

Greg Mortenson's Three Cups of Tea (co-written with David Oliver Relin) tells how Mortenson fell in love with Pakistan after a failed mountain climbing expedition and decided to help the people who rescued him by building a school for their children. After the effort and investment required to build the first, he continued building. According to Wikipedia and the website of the institute he helped found, they have built more than 60 schools in Pakistan and Afghanistan.

The book follows his adventures from the fateful climb through fundraising, kidnappings, and fatwas from imams incensed at westerners educating Muslim children.

Mortenson is quite a hero. He does a great job of helping the reader to see the Pakistanis as people, and shows how far a little education can go. The people he deals with are willing to give up a lot in order to make it possible for their children to get an education. Mortenson himself makes enormous sacrifices to continue the work, even when many factions try to prevent the education of their neighbors' girls.

The book is well enough written to have been a New York Times best seller. It's well worth the read.

Monday, September 10, 2007

Market Makers for Multi-Outcome Markets

Previous articles in this series have discussed market makers and how they differ from book order markets, how to improve liquidity in multi-outcome claims, and how to integrate a market maker into book order systems. But none of those talked in any detail about how a multi-outcome market maker coordinates prices and probabilities. Those details turn out to be important for an upcoming article on Combinatorial Markets, so I'll go through them carefully here.

Researchers use scoring rules as a laboratory tool to induce people to reveal their true expectations about some set of outcomes. Participants are asked to estimate the likelihood of each outcome in the set; their scores are some function of the value they gave for the actual outcome. Scoring rules are called "proper" if they are designed so that the participant's best strategy is to honestly reveal the probabilities that seem most likely. The Logarithmic Scoring Rule (one of the proper rules) provides a reward equal to the logarithm of the estimate the participant gave for the actual outcome. Since the estimates must total 1, the participant can only increase some probabilities by decreasing others.
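To see why the logarithmic rule is proper, here's a minimal sketch in Python (the numbers are invented for illustration): a participant whose true belief is 70/30 maximizes her expected log score only by reporting 70/30.

```python
import math

def log_score(report, outcome):
    # Reward: the log of the probability the participant
    # assigned to the outcome that actually occurred.
    return math.log(report[outcome])

true_belief = [0.7, 0.3]

def expected_score(report):
    # Expectation taken over the participant's true belief.
    return sum(p * log_score(report, i) for i, p in enumerate(true_belief))

for report in ([0.5, 0.5], [0.7, 0.3], [0.9, 0.1]):
    print(report, round(expected_score(report), 3))
```

Hedging toward 50/50 and exaggerating toward 90/10 both lower the expectation; the truthful report scores highest.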

Robin Hanson described how an Automated Market Maker (AMM) that adjusts its prices based on a scoring rule can support unlimited liquidity in a prediction market. If each successive participant pays the difference between the payoff for her probability estimate and the payoff due to the previous participant, the AMM effectively pays only the final participant. If the AMM's scoring rule is logarithmic, participants who update only some probabilities don't affect the relative probabilities of the others they haven't modified. (This last effect is only valuable for Combinatorial Markets, which I'll talk about in a later post.)
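The chaining of payments is easy to check (a sketch, with an invented price history): each participant's payoff for the actual outcome is the difference between her log score and the previous participant's, so the sum telescopes and the AMM's total liability depends only on the first and last estimates.

```python
import math

# Successive market estimates for the outcome that eventually occurs.
history = [0.5, 0.6, 0.4, 0.8]

# Each participant is paid the difference between her score and
# the previous participant's score.
payoffs = [math.log(new) - math.log(old)
           for old, new in zip(history, history[1:])]

# The sum telescopes: the AMM effectively pays only the final
# participant, relative to its opening estimate.
print(round(sum(payoffs), 3))                         # = log(0.8 / 0.5)
print(round(math.log(history[-1] / history[0]), 3))
```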

The change in the user's payoff is log(newP) - log(oldP) (or equivalently log(newP/oldP)) for each state. For a binary question, the possible gain will be log(newP/oldP), and the cost will be log((1-oldP) / (1-newP)). For the rest of this article, I'll use gain and cost rather than the log(...) expressions, since there are only these two, and I'll be using them a lot.
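In code, for a hypothetical trader moving the price of outcome A from 50% to 60% (the function names are mine, not any market's API):

```python
import math

def gain(oldP, newP):
    # Possible profit if the outcome occurs.
    return math.log(newP / oldP)

def cost(oldP, newP):
    # Amount at risk: the loss on the opposite outcome.
    return math.log((1 - oldP) / (1 - newP))

print(round(gain(0.5, 0.6), 3))  # log(1.2)  ≈ 0.182
print(round(cost(0.5, 0.6), 3))  # log(1.25) ≈ 0.223
```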

In multi-outcome markets, the most common approach is to let the trader specify a single outcome to be increased or decreased and to adjust all the other outcomes uniformly, but this isn't the only possibility. This design choice has the useful property that the probabilities of the other outcomes are unchanged relative to one another. Since the other outcomes are treated uniformly, they can be lumped together, which results in the same arithmetic as a binary market: those other cases sum to 1-P, and the price is cost. It is also reasonable to allow the trader to specify either a complete set of probabilities, or particular cases to increase and decrease and how much to change them. In any case, the LMSR adjusts the reward for each outcome to be log(newPi/oldPi). I'll describe more possibilities in this vein when I cover the Combinatorial Market.
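Here's a sketch of the single-outcome update under the uniform-adjustment choice described above (my own function name, and it's only one of the possible designs):

```python
import math

def update_one_outcome(probs, i, newP):
    """Move outcome i to newP; scale every other outcome by the same
    factor so the total stays 1 and their relative probabilities are
    preserved.  Rewards are log(newPi / oldPi) per outcome."""
    scale = (1 - newP) / (1 - probs[i])
    new_probs = [newP if j == i else p * scale
                 for j, p in enumerate(probs)]
    rewards = [math.log(n / o) for n, o in zip(new_probs, probs)]
    return new_probs, rewards

new_probs, rewards = update_one_outcome([0.5, 0.3, 0.2], 0, 0.6)
print([round(p, 2) for p in new_probs])  # → [0.6, 0.24, 0.16]
```

The untouched outcomes keep their 3:2 ratio and all share a single reward, log(0.8), which is why the arithmetic collapses to the binary case.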

I hope you found all this interesting in an intellectual sort of way, but you may have noticed that this description isn't applicable to markets in which the traders hold cash and securities. The whole thing is couched in terms of participants who will receive a variable payoff, but they don't pay for the assets, they merely rearrange their predictions in order to improve their reward.

In order to turn this into an AMM that accepts cash for conditional securities, we have to pay careful attention to the effects of the MSR on people's wealth. The effects are easiest to describe in the binary case, and every other case is directly analogous, so I'll start there. In a binary market, the participant raises one probability estimate (call it A) from oldP to newP and lowers the probability of the opposite outcome (not A) from 1-oldP to 1-newP. If the trader has no prior investment in this market, the reward increases by gain.

In order to reproduce that effect in cash and securities, the AMM charges cost in exchange for gain + cost in conditional securities. Why does the trader get securities equal to the cost plus the potential gain? The effect is that if A occurs, the participant has paid cost and received gain + cost, for a net increase of gain over the original position. If A is judged false, the participant has paid cost with no return, which is exactly the effect we hoped to match.
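The settlement arithmetic can be sketched directly (hypothetical numbers; `trade` is my name for the exchange, not Zocalo's API):

```python
import math

def trade(oldP, newP):
    """Binary trade against the AMM: pay `cost` in cash, receive
    `gain + cost` in securities paying 1 each if A occurs."""
    gain = math.log(newP / oldP)
    cost = math.log((1 - oldP) / (1 - newP))
    return cost, gain + cost

cash_paid, securities = trade(0.5, 0.6)

# If A occurs: net = securities - cash_paid = gain.
assert abs((securities - cash_paid) - math.log(0.6 / 0.5)) < 1e-12
# If A doesn't occur: net = -cash_paid = -cost, matching the MSR.
```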

When an AMM supports a multi-outcome market using the approach I described above, one outcome is singled out to increase (or decrease), while all other outcomes move a uniform distance in the opposite direction. If the single outcome is increasing, the exchange is trivial to describe: we charge the trader cost for gain + cost in securities. The effect looks just like the binary case. The trader has spent some money and owns a security that will pay off in a situation she thought was more likely than its price indicated.

If the trader singles out one outcome to sell (and thus reduce its probability), the differences among the alternatives I described in the first article in this series, on Basic Prediction Market Formats, become evident. The trader is betting against something, and the market can represent that using short selling (like InTrade), a complementary asset (like NewsFutures and Foresight Exchange), or a basket of securities representing all the other outcomes (like IEM). Since there are distinctly different points of view on this question, different markets will make different choices.

In order to support the short sales model, the trader needs to receive the payment first along with a conditional liability. In our model, the trader would receive gain in cash immediately, and securities that required repayment of gain + cost if the outcome (which the trader bet against) occurs. The platform would presumably require the trader to hold reserves to ensure the repayment.

With baskets of goods, the trader would get the appropriate number of shares of each of the other outcomes. The charge would be cost, and that would purchase gain + cost of conditional assets in all other outcomes.

The complementary assets model would charge cost in currency, and provide gain + cost of an asset that paid off if the identified outcome didn't occur. The complicated part of this representation is that traders can hold both positive and negative assets. In a 4-outcome market, a trader holding 3 units of A and 2 units of B who sold 4 units of C could be shown equivalent portfolios of either A: 3, B: 2, C: -4 or A: 7, B: 6, D: 4. I think either choice is defensible. The first resembles the transactions the trader has made, and so is probably more recognizable; the second provides a more consistent view of possible outcomes (and looks the same as baskets). If both positive and negative numbers are shown, the trader has to realize that the negative holdings pay off in all other cases. On the other hand, displaying a portfolio in a 7-outcome market as A: 3, B: 3, C: 3, E: 5, F: 3, G: 3 doesn't seem as clear as D: -3, E: 2.
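The two displays are interconvertible: adding a uniform basket of every outcome (which always pays exactly 1 per unit) shifts all holdings equally without changing relative payoffs. A sketch of the conversion from the short-position view to the all-positive basket view (the function name is mine):

```python
def as_basket(holdings, outcomes):
    """Shift all positions up by enough uniform baskets to eliminate
    negative holdings, then drop the zero positions."""
    shift = -min(holdings.get(o, 0) for o in outcomes)
    basket = {o: holdings.get(o, 0) + shift for o in outcomes}
    return {o: q for o, q in basket.items() if q != 0}

# The 4-outcome example from the text:
print(as_basket({'A': 3, 'B': 2, 'C': -4}, 'ABCD'))
# → {'A': 7, 'B': 6, 'D': 4}
```

Running it on the 7-outcome example (D: -3, E: 2) recovers the longer A: 3, B: 3, C: 3, E: 5, F: 3, G: 3 display the same way.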

I doubt this detail will be of much interest to most users of Prediction Markets. Luckily for them, the trade-off the logarithmic rule makes between cost and reward just happens to produce prices that match probabilities. But if you are implementing Hanson's LMSR, you should understand the alternatives well enough to verify that your market maker correctly implements the design.

Zocalo supports binary and multi-outcome markets with a market maker based on the Logarithmic Market Scoring Rule. The design takes advantage of the parallels between the different markets by implementing the logarithmic rule in only one place.

This article is cross-posted to Midas Oracle.
(There's a good discussion on that site.)

Other Articles in this series

Friday, September 07, 2007

Daniel Gilbert: Stumbling on Happiness

Dan Gilbert's Stumbling on Happiness explains in great detail how most of us are confused about what will make us happy, and therefore (from our own perspective) we systematically make foolish choices. The book is written at a very approachable level, and has plenty of humorous asides that make it fun and easy to read. But if you're looking for a solution to the problems Gilbert points out, Gilbert himself doesn't hold out much hope. The ways our memory and imagination trick us are pervasive and hard to defeat. He points out a few ways to at least track which direction the failures tend to lead so you can try to overcompensate, but your deeper instincts will constantly be telling you that this time your hunches are right.

There were several examples throughout the book where I could say "I don't make that particular mistake," but I suspect everyone can find 10% or maybe 20% of the examples that don't apply to them. If each of us still has 80% of the foibles he describes (and presumably many more that weren't entertaining enough to get into print), that's still a lot of irrationality to go around. One way I mostly do better than his prototypical reader is that I'm a self-aware optimist and a happy person. I'm pretty confident that even if things don't go my way, I'll generally be happy. It's still hard to believe that I'd be as happy if I were blind or lame (and so I go to some trouble to avoid situations that make that kind of permanent damage too likely), but for all the other minor misfortunes, I'm usually willing to take chances, confident that I'll be happy enough even if things don't work out.

Gilbert's most common example of misprojection is that people expect to be unhappy if their team doesn't win, or they don't get the promotion. If you make projections by looking at the people around you, rather than by trying to imagine how you'd feel, you'll do better. When you imagine, you focus on the causes of the scenario you're evaluating. If you look around for people who have suffered the fate you're considering, you'd discover, for example, that half the teams lost last week, and there are few fans (whatever the sport) whose morale swings for more than an hour based on their team's latest results.

This is a more useful book for people trying to understand the psychology of making choices than for people trying to learn how to be happy. People who are already happy most of the time will learn (if they read carefully) that it doesn't much matter what they do or choose; they'll likely be happy in any case. Gilbert's story doesn't have much to say about those who aren't generally happy, which is probably the biggest weakness of the book.

Saturday, September 01, 2007

K. W. Jeter: Farewell Horizontal

K. W. Jeter's Farewell Horizontal is a reasonably well-told, straightforward hard SF story. The story follows a single character, Ny Axxter, as he wanders around the outside of a free-floating cylindrical habitat. He's a freelance graffex artist, barely scraping by on the last of his savings and the very occasional odd job. He recently abandoned life in the "horizontal" internal spaces, and has yet to find a stable role outside the cylinder, where gravity constantly threatens to pull everyone and everything down the wall.

The setting and the conflicts were moderately interesting, but Ny was a little too clueless for my taste. There were several times that Axxter blithely walked into situations that were clearly set-ups of one kind or another. Sometimes he did so against solid advice from people he'd recently met, and other times in the face of clear warning signs.

Even the resolution was a little too much of a set-up for me. Axxter is in sole possession of evidence that the universal information system that everyone trusts and relies on has been subverted, and if he passes it on to the right faction, maybe they'll do something about it, and reward him with a job! Nope, they're in it, too. But Axxter lucks out; it turns out that his private channel to the faction's leaders was leaky, and everyone in and on the habitat got the message, and decided that enough was enough.

Wednesday, August 22, 2007

Sunny Auyang: How is Quantum Field Theory Possible?

Sunny Auyang's How is Quantum Field Theory Possible? is the densest and most difficult book I have actually read all the way through. There are sections that seem brilliantly written, and sections that seem absolutely opaque. All together, I didn't learn as much as I had hoped, but I did come to understand a few points about the way QFT suggests we view the world.

"Until now almost all philosophical investigation of quantum theories have either taken the concept of objectivity for granted or prescribed it as some external criterion, according to which the theories are judged. The judgments often deny the objectivity or even the possibility of microscopic knowledge. I adopt the opposite approach. I start with the premise that quantum field theory conveys knowledge of the microscopic world and regard the general meaning of objects as a question whose answer lies within the theory. This work asks quantum field theory to demonstrate its own objectivity by extracting and articulating the general concept of objects it embodies. We try to learn from it, not only the specifics of elementary particles, but also the general nature of the world and our status in it. What general conditions hold for us and the world we are in so that objects, classical and quantum, which are knowable through observations and experiments, constitute reality?"

A large proportion of the book is dedicated to explaining Kant's Categorical Framework, which I'm willing to summarize by saying that everything in the universe consists of objects and their properties, and relations between the objects and between the properties. Many philosophers of QFT have presented a vision of quantum reality in which the quantum objects don't have definite properties except at the moment of measurement. Auyang shows that this isn't the only possible interpretation. Auyang's major goal in writing this book may have been to rescue Kant's Categorical Framework.

"The world described by quantum theories is remote to sensory experiences and is different from the familiar classical world. There are scientific puzzles, such as what happens during the process of measurement. Quantum theories disallow certain questions that we habitually ask about physical things, for example, the moment when a radioactive atom decays. [...] The working interpretation of quantum theories, which physicists use in practice, invokes the concept of observed results as distinct from physical states. [...] These factors prompt many interpreters to adopt the phenomenalist position asserting that quantum objects have no definite property [...] the observer creates what he observes."

Auyang wants readers to see how the world-view she refers to as "Kant's Categorical Framework" applies not just to the classical world, but also to quantum reality. This contrasts with the view presented by many prominent interpreters of QFT, and understood in the popular lingo as "it tells us that there isn't any truth of the matter with respect to quantum objects." Auyang shows how the quantum world can be described in the same objective way as the real world. QFT doesn't undermine our belief that everything is built up out of objects, properties and relations.

"This work presents a parallel analysis of the conceptual structures of quantum field theory and our everyday thinking. I do not try to describe quantum phenomena in substantive classical terms or vice versa. I try to articulate the categorical framework of objective knowledge, of which quantum field theory is one instance and common sense another. The categorical framework enables us to match logically the formal structures of quantum theories and everyday thinking element by element. The structural fit illuminates their philosophical significance."

The main thing I learned about the shape of the universe is that it's improper to think of space-time as fundamental and of objects (and their properties and relations) as inhabiting a pre-existing space-time continuum. Instead, according to QFT, particles and space-time itself are emergent phenomena that arise from the interplay between fermions (matter fields) and bosons (interaction fields). I've seen Feynman diagrams many times before, but I don't remember seeing a statement as clear as Auyang's "In Feynman diagrams, a matter field is [...] represented by a straight line and an interaction field by a wavy line." Section 8 contains this gem along with a table of the fermions and bosons and how they combine to form the four basic interactions (gravity, electromagnetism, strong and weak forces). Auyang makes it clear that at the quantum level, the fundamental elements are the fermions and bosons, that bosons mediate all interactions between fermions, and that bosons and fermions are emergent phenomena of something more fundamental.

At the classical level, we're used to objects interacting directly: billiard balls collide and reflect in predictable directions. At the quantum level, bosons mediate the interactions between fermions. Photons, which are colloquially described as half-particle, half-wave, aren't particles at all in this sense; they're the entities that mediate the electromagnetic force. Classical photons are just as much an emergent phenomenon as electrons are, but they're part of the constellation of interaction fields while electrons are a kind of matter field. Auyang says "space-like separated transformations do not affect each other, for causality demands that physical effects propagate from point to point with finite velocity." That includes everything from billiard balls caroming to planets tugging on every body in the universe. (Though Auyang admits that "the gravitational interaction is not well understood.")

She goes on to explain how to think of the fields as exhausting space time:

Absolute positions are identities of events. There is no identity without an event. [...] Fields, which are spatio-temporally structured matter, exhaust the universe. It is not that the matter fills space-time; rather, the spatio-temporal structure spans the physical universe.

Auyang also attempts to explain (in § 16) how to think about QFT without invoking the consciousness of the observer. Her explanation seems plausible to me, but I'm not sure I ever understood why the standard model required a special status for the observer. Her argument seems to boil down to this: the proponents of the Copenhagen interpretation are confused by the claim that quantum mechanics is a complete and final theory. (Apparently the term originates in the 1935 EPR paper.) According to Auyang, later proponents pushed the definition too far, insisting that observation required an observer. Auyang says that the observer is implicit, and to the extent that QM provides a complete theory, it's only a complete theory about quantum objects, so the observation has to be at the quantum, not classical, level in order to be part of the theory. I don't know whether theoretical physicists will accept her argument, but I think that's the interesting test of it.

While she is talking about the general concept of objects (§ 15), Auyang makes an observation that seems relevant beyond her intention. She says "When we look around, we see objects, books and pens. The presence of objects is immediate, we do not infer them from sense data." Her point is that we learn that our senses can be in error, and that objects we perceive may not actually be present (due to optical illusions, hallucinations, etc.). But the point is more fundamental: the process of coming to awareness is a process in which our sensory apparatus learns how to lump perceptions together based on in-built notions of permanence, cohesiveness, and various kinds of conservation rules. By the time we're doing anything that can be called thinking, we no longer have (if we ever did) access to raw perceptions; we think and perceive in terms of lumpy objects. Artists have to work hard to learn to see the colors, textures, and contours that make up an image. Greg Egan illustrated this brilliantly in a passage in his novel Diaspora. Jeff Hawkins' On Intelligence relies on the same effect at a different level.

Thursday, August 09, 2007

Sheri S. Tepper: The Fresco

Sheri Tepper's The Fresco wasn't very satisfying. The plot is simple: the aliens arrive, promise to solve many of our problems, and cause a few minor disasters before resolving most social issues and welcoming us to apply for membership in the galactic brotherhood. The science is inconsistent, and other than the protagonist, the characters are fairly one-dimensional. Tepper makes a few attempts to show that some of the abilities of the ETs could be explained by nanotech or other sufficiently advanced technology, but she feels free to introduce new tools and powers whenever it suits her fancy. The solutions to social problems (which include unexplained psychological adjustments) are mostly in the too-complex-to-explain-to-the-reader category.

The conflict and action are split between the disasters on Earth caused by the conflicts among ETs about their rules of engagement with us and an exploration of the psychology of the aliens. The local disasters are a side-show; the real action concerns the eponymous fresco. The society of the race of ETs that makes first contact is based on wisdom drawn from a set of ancient murals produced by a distant progenitor. The murals are holy enough that they haven't been touched—or cleaned—in many generations. The standard interpretation of their meaning comes from a revered scribe several generations removed from the drawings' creation, and there are clear indications that the murals were already illegible at that point.

The analogy to religions based on frozen interpretations of an ancient text is obvious, though Tepper doesn't dwell on it. In this case, we're merely shown that the ETs have lost any sense of the original meaning, that their society is unstable if they suddenly have to figure things out for themselves, and that an external agent (benevolent humans) can fix everything by ensuring that the accepted interpretation is a benign update of the interpretation everyone is used to.

Unsatisfying morality, unsatisfying epistemology, unsatisfying story-telling.

Sunday, August 05, 2007

Tracy Kidder, House

A friend (thanks Hal) loaned me his copy of Tracy Kidder's House when I mentioned that we were planning a remodel. I think he intended me to take it as a cautionary tale of the hazards of underspecifying the design before beginning work, but I read it more for the interesting story of interpersonal (management) struggles and the details of design and construction.

Many years ago, I thoroughly enjoyed Kidder's The Soul of a New Machine. Kidder is exceptionally good at showing what is going on when a group works together to build something that is bigger than any of them can manage on their own. There are always political struggles, but it is heartening to see everyone striving to overcome strained interpersonal relations to ensure that the house turns out the best it can.

House is about a new company formed by a small group of experienced carpenters (but inexperienced businessmen) building a house in Massachusetts for a young lawyer under the direction of an architect who is just starting his practice. The house won design awards for the architect, and the lawyer (according to Kidder's story) was happy with the house, but the builders didn't make much money for their effort. The story describes the process of building the house in a fair amount of detail, but the focus is always on negotiations about who will pay for changes, who should have foreseen their necessity, and what was agreed to up front. The builders would have been much happier with the outcome if they had understood better how to write an estimate that left them room for profit. As it was, they were constantly squeezed when the lawyer pushed back on the price of materials and asked for trade-offs to his advantage.

Most of the blame for the particular problems goes to the agreement to proceed with construction before the design was complete. This meant the builders couldn't proof the totality of the design, to ensure, for instance, that there was room for the landing of the grand staircase where the architect envisioned it. Another acrimonious conflict involved the architect's grand vision for how the Greek Revival decorations would be built, but this mostly affected the financial accounting and people's attitudes toward one another without significantly affecting the finished house.

Of course, my reading of all this is heavily influenced by the fact that my father is an architect, that I helped him (a little; after the first house, I was mostly away at college during construction) build three different houses, and that I'm a software developer and development manager. I often say that one of the most important lessons for software developers to learn is how to get requirements from a customer. As Extreme Programming points out, the customer isn't in a position to actually say what she wants at the beginning of a project; the designer has to evoke the needs, and show how they might be filled, in order to allow the customer to fill in the details that the designer isn't familiar with. XP teaches the developer to make the design visible as early as possible so the customer can react to the parts that work and the parts that aren't right. When building or remodeling, many parts are harder to change once in place, but there are still opportunities to improve and solidify the design as a project proceeds.

In our own remodel, we've been trying to explain to the contractor that we understand that he expects us to change our minds; we've left room in our budget to make changes as we see how things turn out. He doesn't seem to grasp that, coming from a different industry, we can appreciate the kind of flexibility he has to leave himself. Maybe he deals with changes out of an attitude of adaptiveness, rather than from an articulated understanding of how they affect planning and budgeting. He shows so much flexibility that it's hard at times to pin him down on anything. We did finally get a rough schedule of construction so we can coordinate on the things we have to specify and buy ourselves (countertops, flooring, new stove, tub, sinks, finish details, etc.). We also have to plan for how and when we will vacate each part of the house, and when we'll have to be out of the house entirely.

House is very engaging. If you have any interest in how houses are put together, or how teams work, there is a lot of meat here. While all the parties try to be tough negotiators at key times, they all want to end up with a beautiful house. Kidder builds a beautiful story out of the process.

Tuesday, July 31, 2007

Remodel step 1: asbestos removal

We're having a remodel done. We started the process last fall by refinancing the house, and setting aside part of the proceeds for a remodel. (Interest on money from a refinance is deductible if it is used for remodelling. We opened a separate account and are only using those funds for the remodel.)

A few months ago, we actually started planning the details of what we wanted to do, and hired a contractor, etc. If I had thought harder, I would have realized it was going to be an adventure and started writing about it earlier. Eventually, we'll have an additional bedroom, and a refurbished kitchen and master bathroom. I may write more details about the changes in floorplan later.

This past weekend was the first of the exciting parts: before the remodel starts, we needed to have the asbestos-tainted acoustic ceilings removed. We made an appointment a few weeks ago to have the crew show up yesterday, so we knew we had to have the house cleared out by Sunday night. The asbestos removal company said we needed to have all the furniture out, all the art off the walls, the drapes and drapery hardware removed, and the light fixtures down from the ceilings. We spent most of the day Friday, and all day Saturday and Sunday, finishing up. We normally park our cars in the garage, so there's plenty of room to pile everything there. We'll park both cars in the driveway for the next few months.

Yesterday we caught up on sleep, did some errands, and stayed away from the house. When we went by after playing softball, we found that the asbestos removal was complete. Our one surprise was that they had removed the light fixtures, leaving dangling wires. Today I reconnected most of the light fixtures (some are in areas that will be remodelled soon, and aren't worth replacing) and replaced the drapes in the bedrooms.

The cleaners come on Thursday, and we want to have them mop all the floors (the asbestos removal left minor stains everywhere), then we'll set up the bedroom again, and gradually move back in the stuff we're willing to move 4 times. (Late in the remodel, all the hardwood floors will be refinished, and everything on the floor will have to be moved out again.) Everything else will stay in the garage until it's all done.

I'll add photos of the empty house and the clean ceilings shortly.

Thursday, July 26, 2007

Thomas Sowell: Black Rednecks and White Liberals

Thomas Sowell's Black Rednecks and White Liberals reads like a collection of essays. Once you get through the whole thing, it becomes evident that Sowell is marshalling many arguments toward a common point. Since he never says what the point is though, we may not all agree on the point, or even that there is one.

I think the point he's trying to make is that socialization has led America's Blacks into a backwater that stifles individual and group progress, and that they need to abandon the culture individually in order to change things. There are chapters intended to show that individual progress is individually rewarding; that slavery isn't to blame for Blacks' current plight; that Blacks have fared well in parts of the US in the past; and that culture can push members of a group in a common direction, but doesn't determine outcomes.

Sowell addresses issues calmly in this book that nearly inevitably generate more heat than light. His publisher and many readers seem to give him more leeway to talk about the causes and effects of black culture than any white academic would be likely to receive. He uses the opportunity well to show how a culture that raises up sloth and an aversion to education will destroy any chance at progress, even for able members of the group. He spends a long time (and the title of the book) trying to show that the culture defended by Blacks isn't their own--Sowell traces its roots to poor white Crackers in Britain. It's not obvious that Sowell's derivation is correct, but if the argument starts the ball rolling on Blacks' coming to disown this dysfunctional culture, he'll have done a major favor for all of us.

Two other sections stand out as having value for some audiences. (The history he presents wasn't news to me, but I suspect Sowell is right that Political Correctness is keeping the facts hidden from many people.) "The Real History of Slavery" points out that many other peoples and ethnic groups have been subject to slavery over the ages (and their descendants did fine a few generations later) in order to argue that it's unreasonable for Blacks to hold onto slavery as an explanation for their current troubles.

"Black Education" describes several high schools and colleges that have been able to routinely turn out educated and successful Blacks. Sowell shows that they did this while enrolling Blacks from all social strata, and didn't limit attendance to students who had already demonstrated academic competence. Sowell shows that in each case, the main difference between these successful schools and others at the time or other current schools was their expectations of the students and their approach to education. Schools with low expectations and lax methods don't produce superior results. Schools that accept inferior teachers (in the name of equal representation on the faculty) or that don't expel students who don't make an effort perform poorly. Sowell, quite explicitly, lays the blame for today's failing schools on "liberals" who insist that it is important to respect students' cultures even when the culture is opposed to achievement.

Friday, July 20, 2007

New Zocalo release; OpenSource JavaMail incorporated

I just published a new release of Zocalo that includes the ability for new users to register for an account. I'm writing this note to report on what I learned from integrating JavaMail into Zocalo about Sun's progress on Open Sourcing the Java libraries. More than a year ago, I started down the path to integrate email into Zocalo for new account creation. On the web today, if users are to create their own accounts without requiring intervention from an administrator, the application pretty much has to be able to send mail. (You can do it just with a captcha to ensure they're a real person, but doing a round-trip confirmation is better, since you can update forgotten passwords, and you end up with the ability to send transaction reports.)

What I discovered at the time was that Sun had announced that they were going to open source all the Java libraries, but hadn't made much progress. The mail libraries hadn't been freed up, and in order to use any of the third-party open email packages I could find, I would have had to require that people installing Zocalo manually download the base package from Sun themselves. This seemed like enough of an additional burden on the installation process that I punted. I actually implemented a rudimentary utility function that would invoke a local process to send mail. I know how to make this work on Linux machines, and it could probably have been coerced into being usable on Macintoshes, but I don't know what it would take to make it work on Windows. And then I never made any use of that functionality, since it didn't seem likely to work on most platforms.

In early July, I decided it was time to revisit the issue, and spent some time searching through the status of Sun's open source efforts. I was pleased and surprised to find that they've actually released a lot of it. (It might be all of Sun's libraries for all I know. I couldn't quickly find an overview, history, or description of the source of all this code.) It all seems to be gathered together at the GlassFish project.

I was able to find the JavaMail area reasonably easily, though figuring out what jars I would need was somewhat harder. And figuring out what the open source terms were took some reading. The license itself seems pretty impenetrable to me, but I was reassured by re-reading the Open Source Definition at Open Source.org. Since OpenSource.org thinks that Sun's CDDL meets their definition, I don't have any qualms about shipping the GlassFish JavaMail libraries with Zocalo.

I then downloaded one of the releases (GlassFish's release numbering is confusing to me; I couldn't easily figure out which version was the best stable release for basic operations), and tried adding one jar file at a time to Zocalo, to see how much I would need to get things running. The GlassFish documentation implies that you have to buy into their whole paradigm to get it to work. There's lots of J2EE stuff, and a JavaBeans Activation Framework, and I don't know what-all. All I want to do is send SMTP mail via someone else's server. The person installing Zocalo will have to specify an SMTP server, and give a password (unless there's something running on the local machine, or the SMTP server is accessed securely on a LAN, but I haven't set those up yet). I could have included an SMTP server, but the person installing still has to make a secure connection to an external server; it isn't any easier if you're a server than if you're logging in as an SMTP client. In the end, all I needed was activation.jar and mail.jar. No extra XML configuration files, no extra VM parameters, no hassle. There are a few more configuration parameters in Zocalo's startup files, but whatever I did, you'd have to specify the SMTP server and password somewhere.
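For the curious, here's a minimal sketch of the kind of JavaMail code involved, assuming mail.jar and activation.jar are on the classpath. The host, account, and URL names are placeholders for illustration, not Zocalo's actual configuration:

```java
import java.util.Properties;
import javax.mail.*;
import javax.mail.internet.*;

public class ConfirmationMailer {
    // Send an account-confirmation message through an external SMTP relay,
    // authenticating with the configured user and password.
    public static void send(String smtpHost, final String user, final String password,
                            String to, String confirmUrl) throws MessagingException {
        Properties props = new Properties();
        props.put("mail.smtp.host", smtpHost);  // the server the installer specifies
        props.put("mail.smtp.auth", "true");    // authenticated relay
        Session session = Session.getInstance(props, new Authenticator() {
            protected PasswordAuthentication getPasswordAuthentication() {
                return new PasswordAuthentication(user, password);
            }
        });
        Message msg = new MimeMessage(session);
        msg.setFrom(new InternetAddress(user));
        msg.setRecipients(Message.RecipientType.TO, InternetAddress.parse(to));
        msg.setSubject("Confirm your new account");
        msg.setText("Click to finish registration: " + confirmUrl);
        Transport.send(msg);  // the round-trip confirmation described above
    }
}
```

That's the whole surface area needed for the registration round-trip; no J2EE container or extra configuration files are involved.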

In the end I was extremely pleased with how easy it was to use Sun's recently released JavaMail package. The 2007.2 release of Zocalo Open Source Prediction Markets makes good use of it, and it wasn't too painful to incorporate.

Monday, June 18, 2007

Thomas Barnett: TED talk

Thomas Barnett (author of "The Pentagon's New Map") gave an entertaining talk for TED. I listened to it via podcast, but re-watched it later, since it was clear from the audio that it was pretty visual. The US has an unstoppable military force. But what comes after it works its magic? You can agree or disagree with his prescriptions, but at least it's a start on discussing a new strategy for the US military. The book (and apparently his newer book) have a lot more detail and are well worth reading.

Monday, May 28, 2007

David Warsh: Knowledge and the Wealth of Nations

David Warsh describes the development of economists' models of growth and progress in his book Knowledge and the Wealth of Nations. The story revolves around the origins and consequences of a particular paper ("Endogenous Technological Change" by Paul Romer, 1990) in order to give us a view into the way economists think and the way economic theory evolves.

The book spent more time on personalities and personality clashes than I would have preferred, but Warsh apparently wanted this book to say more about how economists work than about the ideas they've developed. As a result, you have to work a little harder to keep track of how economists' models of progress evolved from Adam Smith to the present.

It shouldn't be a surprise that models of the real world (in any field) usually start simple, and accumulate details as people discover areas where the models' predictions aren't sufficiently clear to answer questions that arise. The thing that's interesting and hard to track when looking backward is remembering where the gaps were, and the order in which the problems were addressed. Warsh traces the earliest descriptions of how business works, and how progress is made to Adam Smith and his description of the division of labor in his Pin Factory example.

But the earliest economists described everything in prose. When the models were formalized, they started out simple. Until quite recently, all formal models of production in an economy were static: they assumed the means of production didn't change over time. Often the economists who presented these models explicitly recognized that this was an important factor missing from their models, but they still had to start simple in order to have models they could manage.

The path of evolution of the models next added the idea of growth, but assumed that progress was constant and outside the influence of the manufacturers in the model. This allowed the economists to model and describe more sophisticated situations, but didn't match what people could see about how developing countries advanced over time. Romer's contribution to the field was to build a model that made entrepreneurial investments change the cost of doing business and the alternatives available to actors within the model.

Romer's model assumes that the technological advances produced as a side effect of investment are general knowledge (they are non-rival goods), but that some of the benefits can be monopolized by the developer for a time (they are partially excludable). One of the major consequences is that the model predicts that trade barriers prevent underdeveloped countries from advancing, and that borders open to trade allow lesser-developed nations to gradually catch up with their trading partners, since they can take advantage of the greater stock of knowledge in the market.
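The catch-up prediction can be illustrated with a toy simulation. To be clear, this sketch and every parameter in it are illustrative assumptions of mine, not Warsh's exposition or Romer's actual model; it only captures the qualitative point that nonrival knowledge spills across open borders:

```java
// Toy catch-up dynamics: a follower economy either absorbs part of the
// frontier's knowledge gap each period (open to trade) or relies only on
// its own, slower innovation (closed). All numbers are illustrative.
public class CatchUp {
    // Returns follower output as a fraction of frontier output after `periods`.
    public static double ratioAfter(int periods, boolean openToTrade) {
        double frontier = 1.0, follower = 0.2;
        double gFrontier = 0.02;  // frontier growth per period
        double absorb = 0.10;     // share of the knowledge gap absorbed when open
        for (int t = 0; t < periods; t++) {
            frontier *= 1 + gFrontier;
            if (openToTrade) {
                // nonrival knowledge spills over from the larger stock
                follower += absorb * (frontier - follower);
            } else {
                // isolated: only slower home-grown growth
                follower *= 1 + gFrontier / 2;
            }
        }
        return follower / frontier;  // a ratio near 1 means convergence
    }
}
```

Run over a hundred periods, the open follower closes most of the gap while the closed one falls further behind, which is the qualitative prediction the review draws from the model.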

I've read several books over the last few years that point out that the idea of progress is fairly recent in human history. Romer's contribution wasn't in noticing it; it was in deciding it was important, and in figuring out a way of bringing it into the models that economists use. Now that invention, discovery, and the sharing and hoarding of knowledge are explicit in the models, economists' recommendations to policy makers more often point in the direction of investment in education and new technology.

Along the way, Warsh presents an interesting history, and describes other ideas Romer and his colleagues were struggling with and how they led to the particular paths chosen. If you're interested in the history of this particular idea, or how economists (or scientists in general) work, it's an engaging book.

Tuesday, May 15, 2007

Judith Rich Harris: No Two Alike

Judith Rich Harris's book No Two Alike is a followup to her previous work, The Nurture Assumption. In her first book, Harris explained that most of the non-genetic effects on the personalities of adults are a result of their interactions with peers rather than with their parents. She pointed out that many people want to believe (and prove) that parents are the major source of their children's personalities. According to The Nurture Assumption, the field of sociology has been confused for several decades: it has been trying to distinguish nature from nurture, when it needed either to distinguish the effects of heredity from those of environment, or to disentangle the environmental influences, which include both parents and peers.

The best tool, according to Harris, for distinguishing the effects of heredity and environment is the study of twins. Comparing twins raised together with twins raised apart controls for genetic effects and allows us to see the effect of gross differences in environment. Comparing fraternal twins raised together with identical twins raised together holds the gross environment constant and makes it easier to see which differences are purely genetic.

In the new book, Harris focuses on why twins are so different, in order to isolate the causes of differences that aren't explained by other results. The existing literature says that some proportion of personality differences is due to genetics, and some to each of various environmental causes: parents, wealth, neighborhood, etc. But a significant amount of variation remains that isn't apparently caused by any of these. Her focal example is that even Siamese twins have different personalities, even though they share all of their genes and all of the environmental influences that anyone could hope to treat as responsible.
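The twin comparisons at issue here are the basis of the classic Falconer (ACE) decomposition in behavioral genetics. As a minimal sketch (the example correlations in the note below are commonly cited ballpark figures, not numbers from Harris's book):

```java
// Falconer's classic ACE decomposition from twin correlations.
// h2 is the additive genetic share, c2 the shared family environment,
// and e2 the leftover "nonshared" variance whose source Harris is hunting.
public class TwinAce {
    public static double[] ace(double rIdentical, double rFraternal) {
        double h2 = 2 * (rIdentical - rFraternal); // identical twins share twice the fraternal kinship
        double c2 = rIdentical - h2;               // what twins share beyond genes
        double e2 = 1 - rIdentical;                // unexplained, twin-specific variance
        return new double[] { h2, c2, e2 };
    }
}
```

With personality correlations in the neighborhood of 0.50 for identical and 0.25 for fraternal twins, about half the variance lands in e2, which is exactly the residual the book sets out to explain.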

Harris' conclusion (skipping over most of her argument for the moment) is that there must be something driving each of us to be unique, and that means we have to find a distinction to enhance. The bottom line is that a significant part of personality (who we are) isn't determined by factors that we can examine or control. Each individual starts out with an endowment of heredity, and occupies an environment that isn't fully under their control, but the developing personality is still a negotiation between that individual and their context. If the strongest part of their innate tendencies is best suited for a niche that is already filled, they will look for a second best. When two identical individuals struggle to fill the same niche, some factor (random or not) will eventually determine a winner in each particular event, and at some point the effects of competition, if nothing else, will drive them to exploit different strengths. The different choices and different results in competition will magnify any differences, and over a reasonable lifetime, they will become recognizably different people.

Along the way, Harris spends a good deal of effort (successfully) demolishing other possible explanations (differences in environment, combination of nature and nurture, gene-environment interactions, environmental differences within the family, gene-environment correlations, and transferability of learning between situations). At the end she argues that she has demolished all the other possibilities and provided an argument for the one remaining theory (an innate drive for status), and so it must be true. But her argument for the specific mechanism is a little too weak, and it seems plausible that some variation or related description will fit the data a little better. I'm reasonably convinced that something drives us to differentiate, but it may not be purely a status drive. Two possible variations on her theory include drives for attention or to master something.

I found the style of Harris' presentation sometimes compelling, and sometimes distracting. She fit the presentation into the framework of a detective story, salted liberally with examples from popular detective stories to show how attentive the detective has to be to details that have distracted other investigators. This worked for me when I was familiar with the detective in question (Sherlock Holmes, Kinsey Millhone), and didn't work when I hadn't read the stories (Alan Grant). I suspect that Holmes is the only one of these widely enough known that other readers would feel they should get the references even when they don't.

On the whole, I think the book was successful in explaining that fundamental differences in personality are effectively the result of an innate drive that causes us to differentiate. The drive makes use of arbitrary differences in the material it has to work with (genes and environment). Parents do make a difference in the lives their children lead, but the ultimate person each child becomes isn't determined by parenting style at any gross level.

Thursday, May 10, 2007

Safe Harbor for Prediction Markets

A group of distinguished economists wrote a public letter advocating a legal safe harbor for some small-scale academic prediction markets. I can see why they limited their goals as they did, and I agree that everything they advocated should be legal, but I think they may have limited their objectives just enough to prevent any big wins.

One thing that Prediction Market maven Chris Masse constantly argues is that prediction markets on dry subjects need to be accompanied by entertaining questions in order to keep the audience's attention. The economists had good reasons for shying away from recommending that sports betting be included, but there are many other topics that diverse markets could include to give traders a reason to check back in. These range from the obvious entertainment questions (movie earnings and Oscar winners) to legislative outcomes (bills passing and control of particular legislative bodies) and the introduction and market success of new technologies. While these kinds of questions might be out of place on some single-topic markets modelled after the University of Iowa's markets on elections, the internal corporate markets that they also mentioned often use them to help maintain interest. The letter's recommendation that the CFTC "allow contracts that price an economically meaningful risk or uncertainty" unnecessarily limits the kinds of contracts that would be allowed.

Back on the side of the letter's authors, I'd have to admit that if the CFTC or Congress acts to implement anything resembling the recommendation, it would very likely increase Prediction Market activity greatly, and eventually lead to a broader acceptance. If the initial definition is too narrow, however, questions that don't have clear economic implications (in the view of Congress and the regulators) might be stuck offshore for a long time to come.

Friday, May 04, 2007

Helping People find Me

Every once in a while, someone contacts me and says they had been trying to reach me for a while, but had an old email address. Then I find out that the address they have is one I haven't used for a decade or more. Brian Warner suggested that people who have several old, defunct addresses should publish them on the web someplace where Google might find and index them, so people who still have the old address will be able to get back in touch. I set up such a redirector about a month ago, and my old addresses page now comes up as the first result for most of my old addresses. It hasn't yet succeeded in helping someone reach me who couldn't any other way, but it might soon, since some of the easiest places to find my name don't provide working addresses any more.

Wednesday, May 02, 2007

Cato on Employment Verification

Jim Harper of the Cato Institute testified at the House's Hearing on Proposals to Improve the Electronic Employment Verification System last week, and made some very important points about privacy and government intrusiveness. Here are some quotes. There is much more valuable material in the complete text.

[I]mprovements that prevent eligible citizens from working should not be adopted. It is more important that American citizens and eligible people should be able to work than it is to exclude illegal aliens from working. There is probably no way to change the current system so that it prevents more ineligible people from working without also preventing more citizens and eligible people from working. [...]

The policy that will dissipate the need for electronic verification by fostering legality is aligning immigration law with the economic interests of the American people. [...]

Because the I-9 process and employer sanctions seek to defeat their economic interests, the system has two principle opponents: employers and workers. It relies on them for implementation, though, which is why success has been so elusive and will continue to be. [...]

Let there be no illusion that people seeking redress for a "tentative nonconfirmation" from the Social Security Administration or the Department of Homeland Security will enjoy a pleasant, speedy process. [...] People will wait in line for hours to access bureaucrats that are not terribly interested in getting them approved for employment. [...]

Electronic verification would have far greater privacy consequences than the current system — and these consequences would fall on American citizens, not on illegal immigrants. [...]

Ironically, all of this government spending and expanded bureaucracy would go toward preventing productive exchanges between employers and workers. Taxes and spending would rise to help stifle U.S. economic growth. Astounding.

Sunday, April 22, 2007

Cell Phones in the Third World

This year's Edge question: "What are you optimistic about?" generated more responses than earlier years' questions, so it has been taking me a while to get through the whole thing. The best line I've read so far (I'm on page 13 of 16) is by Sandy Pentland of MIT:
The International Telecommunications Union estimates that in the poorest countries each additional cell phone installed adds $3000 to the GDP, primarily due to the increased efficiency of business processes.

Sunday, April 01, 2007

Lee Smolin, The Trouble with Physics

Lee Smolin's book The Trouble with Physics is a very well written overview of the state of modern physics. I've been trying to keep up with String Theory, Quantum Theory, wormholes, and all the rest for quite some time, but until now I hadn't noticed that I was looking at trees and didn't know which forest any of them were in. Smolin's book gives a feeling for the lay of the land while he sets about laying waste to String Theory, which has been the great hope for a final unified theory for 25 years or so. I want to come back to the survey, but I'll cover what Smolin has to say about String Theory first, since that was his focus.

String Theory, it turns out, is a family of theories which matches what we know about the universe in some telling details, but which has too many loose ends to give specific predictions on any of the things we know we don't know. The basic idea gives an outline for what a more specific theory would look like, but there's nothing in the outline that gives a rigorous suggestion for how to fill in about thirty parameters that it leaves open. People in the field have been expecting that if someone does the right experiment or makes the right guess about one parameter, the equations will fall into place and the rest of the implications will be obvious. But it hasn't happened yet, and the brightest people in physics have been pursuing this path for a couple of decades.

One of Smolin's simplest demonstrations that the field has been unproductive recently is a description of the steady progress that was made in fundamental physics for more than 200 years. From 1780 through 1980, there wasn't a quarter century that passed without important new contributions to our understanding of the basics of matter, time, energy, and motion. But most of what has been added since 1980 is String Theory, along with Dark Matter and Dark Energy. Dark Matter and Dark Energy, however, are questions, not answers, and String Theory doesn't make any predictions that aren't made by the Standard Model, which was completed in 1973. String Theory also doesn't seem able to produce a theory that incorporates the background independence that general relativity requires.

Smolin is convinced that a major cause of the problem is that a form of groupthink has enveloped all of theoretical physics, enforced (probably unconsciously) by the oldest generation of theorists. He shows that the standard signs are all present, and shows how it has restricted researchers' choices of which areas to explore. It's a plausible charge; I've seen compelling cases in at least two other fields that similar dynamics prevented progress for decades at a time.

Smolin ends with a plea that more research effort be funded in a variety of areas that are at odds with mainstream String Theory. He does a reasonable job of showing that they have some promise, though I'll admit that I'm way out of my depth by the time he gets here.

But I was really impressed by the first section of the book, in which Smolin presented all of fundamental physics from Copernicus, Bruno, and Galileo to the present day in a very approachable format. The focus was consistently on how pre-existing concepts were brought together, in one of two ways. Sometimes the unification shows that two familiar things thought of as distinct are really the same thing, giving a deeper theory of both (the Earth is one planet among several; the Sun is one star among many). Other times, two phenomena that weren't well understood are explained as one common thing (Bacon showed that heat is a kind of motion; Newton showed that gravity explains both planetary orbits and ballistic trajectories; Maxwell showed that electricity and magnetism are different aspects of the same phenomenon). All of this is material I have understood since high school, but viewing it this way gives me a hook to hang a few more things on. The rest of physics constitutes theories that I've been able to understand at a surface level, but which have never been integrated in any deep way. Smolin's framework makes it possible for me to hold a few more theories in my head and see how they give shape to the entire universe.

Einstein was responsible for the next three major unifications. Special relativity forces space and time to be interchangeable in order for the speed of light to be an observer-independent limit. (Mass and energy, too, must be epiphenomena of some more fundamental construct.) General relativity showed that gravity is indistinguishable from acceleration, and that space has no geometry independent of the matter and energy that it contains. These are still a little hard for me to grok completely, but I feel much more knowledgeable about them post-Smolin. Previously, I was satisfied with being familiar with the equations, even though I didn't really understand what they implied about the universe. Now I feel like I understand them as facts about the universe, though of course it's a stretch to claim I understand how they fit together.

Even with QED, QCD, supergeometry, and the two DSRs, I am much happier that I understand what they are theories about than I was before. I've read several books on the frontiers of modern physics, but I'd have been hard pressed to say, more than a month after putting each down, what they implied about the universe. With Smolin's help, I now feel like I understand what the point of most of these theories is, and what it would mean to bring them together.

Tuesday, March 27, 2007

Prometheus finalists announced

This year's Prometheus finalists have been announced. The links from the titles are to my reviews.

The finalists for Best Novel are:

* Empire, by Orson Scott Card
* The Ghost Brigades, by John Scalzi
* Glasshouse, by Charles Stross
* Rainbows End, by Vernor Vinge
* Harbingers, by F. Paul Wilson

And the finalists for Hall of Fame are:

* A Clockwork Orange, a novel (1963) by Anthony Burgess
* "As Easy as A.B.C.," a short story (1912) by Rudyard Kipling
* It Can't Happen Here, a novel (1936) by Sinclair Lewis
* Animal Farm, a novel (1946) by George Orwell
* The Lord of the Rings, a trilogy of novels (1954) by J.R.R. Tolkien
* "True Names," a novella (1981) by Vernor Vinge

I haven't written reviews of any of the hall of fame nominees, though I've read them all. Perhaps that's a sign that I should read them again.

Saturday, March 24, 2007

Robert A. Heinlein and Spider Robinson: Variable Star

The author of Variable Star is listed as Robert Heinlein and Spider Robinson, but it's really a recently discovered outline by Heinlein that Robinson has turned into a novel. The story has a very anachronistic feel to it, with many Heinleinian plot points (it appears to be set in Heinlein's Future History) but references to recent events and relatively modern mores.

I don't much care for stories with a reluctant hero, and Joel Johnston has to be dragged, kicking and screaming, through each of his life transitions. He's actually a fairly decent character in between, but the switch-overs are wrenching.

The story was a nominee for the Prometheus award, but wasn't chosen as a finalist. It doesn't have much to say about freedom or self-direction; it's not a cautionary tale; the governments are benign and consensual or required by circumstances (a 500-person starship has to have a captain and someone to keep the peace) and minimal, but in either case, neither something to fight against nor something to hold out as a model. The characters are pushed around by fate and circumstance.

The story takes so many hairpin turns that there's not much else I can say without spoiling the plot. Talking about anything after the first 50 pages would tell way too much about the most surprising development, and after that, things are pretty much pre-ordained other than the details. I'd rather read a real Heinlein juvenile, making allowances for Heinlein's atavistic viewpoint, or a modern story by Robinson (with all the puns), than this mishmash.

Thursday, March 15, 2007

Would you Trade your Sense of Smell for a Longer Life?

Recent experiments on fruit flies undergoing Caloric Restriction seem to show that the effect of CR is reduced or eliminated if the flies are exposed to the odor of food (yeast, in their case). That led to experiments that genetically eliminated the flies' ability to smell. The experimenters found a stronger contribution to lifespan from smelling the food than from how much the flies ate.

Would you give up your sense of smell if it meant living 50% longer? The question is only hypothetical at this point: replications in other species haven't been reported yet.

Saturday, March 10, 2007

F. Paul Wilson, Harbingers

F. Paul Wilson's Harbingers is a nominee for the Prometheus award. I understand the impulse to nominate the Repairman Jack novels for the Prometheus: Jack is a strong character who fights evil and eschews all contact with the government while living in New York City. His attitude toward self-help and antipathy to gun control also help his case. It's probably the fact that it's horror, and that both the cosmic bad guys and the cosmic okay guys are inscrutable that drives me away.

With this installment, I realized that there was another aspect that was bugging me. In the past 9 novels, little has changed about Jack's relationship to the two warring factions that are thoughtlessly interfering with Jack's life. Harbingers, on the other hand, contains several significant developments in that relationship. In fact, it reads like the second volume of a trilogy. Too bad it took Wilson 9 books (plus 6 others set in the Adversary world) to get us here.

But overall, I don't find the libertarian themes to predominate over the standard horror tropes. And inscrutable powers for whom we barely even qualify as pawns aren't much of a recommendation for a libertarian point of view.

In its defense, this book does have Jack standing up to the Ally after it attempts to take away what he most cherishes. But the negotiation ends with the Ally seeming to hint it won't hurt Jack's family for as long as he does what it wants. Not much of a recommendation.

Wednesday, March 07, 2007

Orson Scott Card: Empire

Orson Scott Card's Empire has been nominated for the Prometheus award. It stands a chance as a cautionary tale like Sinclair Lewis's It Can't Happen Here. The story covers a plot (with tentacles in the White House) to overthrow the government and bring about a left-wing coup. The action is intense, but the politics is strained. Like Lewis' novel, the point (as Card explains in an afterword) is to convince us that it wouldn't take much to turn our current heavily factionalized politics to open warfare.

The story reveals that far-left and far-right groups have been preparing plans in secret for years to take control of the government violently if it should become necessary (because the other side has gotten too much control.) The leftist radicals apparently have the advantage, since they have an extremely wealthy fanatical backer who has been funding the development of new high-tech weapons and staffing and training an army to use them. This part unfortunately reflects the fact that the backers of Card's project are game developers; the new weapons are several steps ahead of what the uniformed military has access to, but would play well on the video game consoles.

The viewpoint characters are sympathetic, well-trained special ops soldiers, and are patriotic to a fault. Their loyalty is to the nation and its legitimate leaders. The action is intense. But ultimately, the story falls short of being a cautionary tale. The motives of extremists on both sides are hard to fathom; they want control of the levers of power to prevent the other side's extremists from doing something bad, but it's not clear what. After the conspirators take control of New York City and get sympathetic resolutions from a few state legislatures, they don't make any visible changes in anyone's lives.

At least Sinclair Lewis showed that the fascists who took over would have to suppress dissent brutally and ruthlessly in order to maintain control. In Card's novel, life goes on as before. At the end of the novel the viewpoint characters suspect the newly elected president of having helped orchestrate the entire coup in order to ensure popular support for his own election as a peacemaker. But they can't find any indication that he has dastardly designs to justify the conspiracy they are looking for.

The attempt at a coup and rebellion were short-lived, caused only a handful of deaths and a moderate amount of damage, and didn't have any impact on everyday life. As a cautionary tale, it's not much to worry about. If the protagonists hadn't done such a good job of protecting the nation, an extremist megalomaniac might have taken over the government, but Card's novel doesn't explain why that was something to worry about.

Tuesday, March 06, 2007

Conditional and Combinatorial Betting

After people have used Prediction Markets for a while and have gotten used to their ability to provide forecasts, they start thinking about different scenarios. Who would be the best Republican to face Clinton? How are the prospects for a market boom or crash affected by the winner of the election? How will poverty be affected by a proposed World Bank program? These kinds of questions can be posed in a number of ways using Prediction Markets. Markets can allow betting on conditional (if) or conjunctive (and) questions. Either one can be used to answer the what-if questions, but they provide different choices to the bettors, and some make it easier for observers to decode the answers.

The easiest compound question to pose is a simple conjunction of two others. InTrade had separate markets in whether Bush would be reelected in 2004 ("BUSH"), and whether Osama bin Laden would be captured before the election ("OSAMA"). Justin Wolfers and Eric Zitzewitz asked InTrade to add a single combined contract that would pay off if both came true. Their paper, Experimental Political Betting Markets and the 2004 Election, shows how the prices on these three contracts can be combined to reveal how one event would be likely to affect the other.

InTrade created three separate claims to cover combinations of the two base questions. They were "Bush wins election" (BUSH), "Osama is captured before the election" (OSAMA), and the combination, BUSH&OSAMA, which would have paid out if both the others came true. Wolfers and Zitzewitz estimated the market's conditional probability by comparing the price of OSAMA with the price of BUSH&OSAMA. If the price levels were rational, the difference between the two prices had to equal the chance that Osama would be captured and Bush would not be reelected. Since the market price of BUSH&OSAMA was 91% as high as the price of OSAMA, they concluded that this ratio represented the conditional probability. A weakness of this conclusion is that while investors and arbitrageurs have an incentive to ensure that the price of BUSH is correct relative to ~BUSH (and OSAMA with respect to ~OSAMA), there's no bet that lets an arbitrageur exploit superior knowledge of the conditional probabilities.
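The arithmetic behind that inference fits in a few lines. The prices below are illustrative stand-ins chosen to match the 91% ratio, not the actual InTrade quotes:

```python
# Deriving an implied conditional probability from conjunctive contract
# prices. Prices are illustrative; real InTrade quotes varied over time.

p_osama = 0.10            # price of OSAMA: Osama captured before the election
p_bush_and_osama = 0.091  # price of BUSH&OSAMA: both events occur

# If prices are coherent, price(BUSH&OSAMA) / price(OSAMA)
# estimates P(Bush reelected | Osama captured).
implied_conditional = p_bush_and_osama / p_osama
print(f"Implied P(BUSH | OSAMA) = {implied_conditional:.0%}")
```

The same division works for any pair of a conjunctive contract and its conditioning event, as long as both prices are trusted.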

Sometimes investors believe they know how one outcome will affect another, and want to bet directly on that linkage. If you were confident before the election that Osama's capture would raise the probability of Bush's reelection to 95% (above the level the market prices implied), having the conjunctive bets didn't provide a bet that would have looked beneficial to you. You might think you could buy BUSH&OSAMA (because you believe Bush's chances are improved if Osama is captured) and sell ~BUSH&OSAMA (because this is the outcome your view says is least likely), but you'd lose both bets if Osama wasn't captured (which is an outcome your prediction doesn't specify).

Conjunctive claims allow observers to deduce connections between claims, but since the investors aren't directly rewarded based on the conditional probabilities, they have little incentive to ensure that the implicit conditional probabilities reflect their understanding of the connections between the outcomes. In order to evaluate different proposals we have to look at what investors would spend up-front, and then compare the possible outcomes and how the investors' earnings change in each situation.

If Bush is a 60% favorite to be re-elected, and the market thinks there's only a 10% chance Osama will be captured before the election, the odds on the conjunctions might be:

                 Bush reelected   Bush defeated
Osama captured       .09              .009
Osama free           .5               .4

If you think Osama's capture would improve Bush's prospects to 95%, what should you buy or sell? Your prediction says that the ratio of Bush&Osama to ~Bush&Osama should be 19:1, but doesn't have anything to say about Bush&~Osama or ~Bush&~Osama. If you buy Bush&Osama and sell ~Bush&Osama, you can make the prices match your beliefs better, but you'll lose money if Osama isn't captured. In order to support conditional bets directly, market operators have to find ways to allow traders to buy positions without exposing themselves to risks due to the independent cases.
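A short sketch, using the illustrative prices from the table above, shows the exposure concretely: the trader who tries to express a conditional view with conjunctive contracts loses money in every world where Osama isn't captured.

```python
# Why conjunctive bets can't express a pure conditional view.
# A trader who believes P(Bush | Osama captured) = 95% buys BUSH&OSAMA
# and sells ~BUSH&OSAMA; we tabulate the net payoff in all four worlds.

price_bush_osama = 0.09     # buy: pays $1 if Bush wins AND Osama is captured
price_nobush_osama = 0.009  # sell: owe $1 if Bush loses AND Osama is captured

def net_payoff(bush_wins: bool, osama_captured: bool) -> float:
    payoff = -price_bush_osama + price_nobush_osama  # upfront cash flow
    if bush_wins and osama_captured:
        payoff += 1.0   # the long contract pays out
    if (not bush_wins) and osama_captured:
        payoff -= 1.0   # the short contract is called
    return payoff

for bush in (True, False):
    for osama in (True, False):
        print(f"Bush wins={bush}, Osama captured={osama}: {net_payoff(bush, osama):+.3f}")
```

The upfront cost of about 0.081 is lost whenever the conditioning event fails to occur, even though the trader's prediction said nothing about that case.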

A contract that acts like a conditional bet directly (written as BUSH|OSAMA, pronounced as "Bush given Osama" or "Bush conditional on Osama") would pay off if Bush is elected, and return your investment if Osama bin Laden isn't captured. That gives investors the right incentive.

                 Bush reelected      Bush defeated
Osama captured   Gain $1             Lose investment
Osama free       Return investment   Return investment
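The payoff rule for such a contract is simple. This sketch is mine, not taken from any particular exchange; it just parallels the table above:

```python
# Payoff of a conditional contract BUSH|OSAMA ("Bush given Osama"):
# the bet only settles when the conditioning event (Osama captured)
# occurs; otherwise the trader's stake is simply returned.

def conditional_payoff(stake: float, bush_wins: bool, osama_captured: bool) -> float:
    if not osama_captured:
        return stake                   # conditioning event failed: money back
    return 1.0 if bush_wins else 0.0   # otherwise an ordinary binary payoff
```

Because the stake comes back whenever the conditioning event fails, the price a trader will pay depends only on the conditional probability, which is exactly the incentive described above.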

In order to support betting on conditional probabilities, the bets have to be able to return the investors' money in particular cases. I know of three detailed proposals that have this property. They are: betting on arbitrary boolean expressions, representing the complete cross-product of possible outcomes (providing a complete set of Arrow-Debreu securities), and using the independent claim as currency for purchasing the dependent claim. There are two additional suggestions that might work, but haven't been written down in sufficient detail to be sure.

Robin Hanson described and implemented Combinatorial Information Markets, which represent probabilities and traders' assets explicitly for all possible combinations of outcomes. Fortnow, Kilian, Pennock, and Wellman described how you might try to support bets on arbitrary boolean combinations of conditions. Their conclusion seemed to be that solving the general problem would be computationally infeasible. They didn't describe how to address the problems they found, but I think it's possible that a market that supported only binary combinations could be designed. And finally, Peter McCluskey built (and released as open source) USIFEX in 1999. It allows the user to use the coupons of the independent event as the currency. This combination allows traders to express conditionals directly. Unfortunately, that system didn't attract a user base quickly enough, and Peter stopped development soon after the initial release.

For an article on Decision Markets written in 1999, Robin Hanson suggested creating markets using assets that pay off in "units of A if B passes" (and "... if B doesn't pass."), and allow traders to trade the assets for each other. The price of A|B in terms of B (which can be built from component assets) expresses the conditional bet. Robin didn't explain how to set up a market in which people trade assets for assets and didn't describe how to let the users see how various combination bets would express the conditional claims they might have been interested in. (This is the first of the two incomplete suggestions.)
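A minimal sketch of how the asset-for-asset trade behaves (my reading of the proposal, with a hypothetical price p): paying p units of the asset "1 if B" for one unit of "1 if A and B" produces a net result that is zero whenever B fails, which is what lets p act like a price for P(A|B).

```python
# Net result of trading p units of "pays 1 if B" for one unit of
# "pays 1 if A and B". When B doesn't occur, both legs are worthless,
# so the trade nets zero -- the bet is effectively called off.

def net_result(p: float, a_occurs: bool, b_occurs: bool) -> float:
    gained = 1.0 if (a_occurs and b_occurs) else 0.0  # the A&B asset's payout
    spent = p if b_occurs else 0.0                    # the B coupons pay only in B-worlds
    return gained - spent
```

The trade has zero expected value exactly when p equals the trader's P(A|B), so the exchange rate between the two assets reveals the conditional probability.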

Robin's Combinatorial Information Market design uses a complex internal representation and can support arbitrary conditional bets. He built a prototype implementation that allows the user to explore these conditionals by choosing assumptions, and then adjusting probabilities in the resulting hypothetical situations. I wrote a prototype of my own in E. Neither prototype is more than a proof-of-concept that the institution works, and neither has been operated for any general market. The strength of this approach is that users can express conditional connections between arbitrary claims; this aspect has been shown to be effective in a laboratory experiment. Robin ran tests of this market after he proposed its use for PAM, and there were apparently no problems in running it with 6 traders estimating all outcome combinations for 8 events. The glaring weakness is that it doesn't scale well. It's not clear how to build a version that would work even with a market with dozens of questions and hundreds of users. I'll describe this market in more detail in a future post in this series.

Peter McCluskey built USIFEX in 1999. It works quite differently and doesn't seem to have the performance problems of the other proposals. The primary idea for supporting conditional trading is that you buy units of A|B using units of B as currency when betting on a conditional question. The effect is that when buying A|B, you end up with coupons of ~B as part of the purchase, and that's what ensures you'll be repaid if the independent event doesn't occur. USIFEX is open source, but it hasn't been maintained since it was released in 2000. The code was resurrected for use in the Swiss MarMix exchange (AFAICT without making any use of the conditional betting features). The biggest weakness of Peter's approach, as I recall, was that it would have taken a lot of users to ensure that the conditional markets weren't extremely thin. A longer description of USIFEX is also in the works.

Todd Proebsting built an implementation of the Hanson design that works without conditionals. Dave Pennock wrote up a description of Todd's approach, focused on the market maker. I intend to describe the implications of Todd's approach for betting on conditionals in a future post. (This is the second incomplete suggestion.) I think it might be straightforward to extend Todd's approach to support conditional betting without running into the exponential growth of Robin's solution. The drawback is that the market operator has to separately capitalize and enable every conditional question that you want the system to support, while Robin's approach enables all of them by default. It's also possible that the Zocalo Open Source Prediction Market software would be compatible with this approach, though it's clear that Zocalo would require substantial modification to support the Hanson proposal.

Other Articles in this series