Thursday, December 07, 2006

Edward Castronova: Synthetic Worlds

Edward Castronova's Synthetic Worlds provides a good explanation of why social spaces constructed in software will be an important part of our future. But first, Castronova takes the time to give us a feel for what it's like to spend time there, so that we'll understand the inhabitants and what they do while online.

Castronova has a very engaging style; I particularly liked the way he keeps the reader apprised of the roadmap he is following and how each chapter and major section fits into the exposition. Castronova is an economist, but he didn't get into this subject expecting to prove an economic theory—he was just playing games. After he had spent a significant amount of time in several social games, he thought of writing a tongue-in-cheek report on the economies he'd visited. But as he gathered enough data to lend verisimilitude to the joke, he found more and more depth to the real economic interactions going on inside the worlds and in external sites where people were selling in-world artifacts and identities for significant sums. He did eventually write the paper, though from a more serious viewpoint than he had originally envisioned. Within six months, it was one of the most-read papers available from SSRN, a major repository for serious academic work.

Since then Castronova has been the go-to guy for a serious social science view of these game worlds. He wrote this book to explain what he has learned. His most important conclusion is that the economic and social consequences of what transpires within these systems are real, and so it doesn't make any sense to call them virtual worlds. Not virtual reality, not virtual economies, not virtual goods, not virtual interactions. The interactions are real; the goods have value in the real world; the economies work just like the real world, and they trade goods, services, and money across borders with real-world economies. It's a real reality, and events there have real effects on the inhabitants and everyone else, in the same way we can be affected by the weather in the Gulf Coast or a fire or an earthquake in the Far East.

As Moore's law continues to increase computational power, these worlds will become attractive to more people, and more people will spend increasing amounts of time and productive effort there. Read Synthetic Worlds for a glimpse of how it may affect you.

Thursday, November 30, 2006

David Friedman, Harald

David Friedman's novel Harald has been nominated for the Prometheus award. It will probably be a finalist, though the libertarianism is muted. It's an enjoyable read, though neither deep nor broad in scope. Harald is accepted as a leader among his people, though they don't have any formal government. He is wealthy, and a brilliant (unerring) strategist and tactician. Actually, that's the biggest weakness of the book; Harald always out-plans the opposition. When they occasionally try to think one step ahead of him, those are the occasions on which Harald has planned two steps ahead. Harald is also an accomplished field doctor, though no one else seems to be familiar with even the rudiments of first aid. He knows a story for every occasion, and is a charismatic leader. For some reason his extraordinary abilities stand out even compared to the standard hero stories we're used to.

But if you're willing to forgive this conceit, it's a good adventure story, with plenty of pitched battles, a few battles won by stealth, and a plausible depiction of how a society without government might defend itself. Unfortunately, we can't tell whether it would work if they couldn't count on the constant attention of a superior general. Harald is always monitoring developments, and imagining what his old enemies might do if they were to decide to attack again.

There are plenty of incidents showing people making choices freely, and bearing the consequences of their choices. No government intervention, except among the citizens of the emperor. Even those who live under kings seem to be allowed to live their own lives, and choose to accept the protection of the local ruler as long as it's worth the cost.

In this society, women are warriors on a par with men, though they maintain their own separate force ("The Order"), and join the battle only when their leaders decide that their interests are at stake. The conflict starts when the young king tries to take control of The Order to ensure that they will help him if the empire attacks. If Harald hadn't stepped in, the king would have ended up with a rebellion, cutting his forces rather than augmenting them. Harald shows him that persuasion works better than force.

Sunday, November 05, 2006

John Scalzi, The Ghost Brigades

John Scalzi's The Ghost Brigades is likely to be a strong candidate for this year's Prometheus award. It may be the best I've read so far. (Since the competition includes Vinge, a past winner, and Stross, a past nominee, that's an achievement.)

Humankind is one of many species competing for living space around the galaxy. There's a little cooperation, and a lot of war. Our government is keeping most of the population in the dark about who our friends and enemies are, and how we're fighting them. Our best weapon is an army of vat-grown, genetically enhanced soldiers who are effectively brainwashed slave labor.

The conflict arises when Charles Boutin, the genius scientist who has helped develop the technologies, becomes convinced that the government was careless about protecting his wife and daughter, and in his grief, lends his assistance to some of humanity's enemies. In order to help track him down, his memory backup is loaded into the mind of Jared Dirac, a custom-designed soldier. Since the mind transplant doesn't take at first, Dirac develops his own personality, with idiosyncratic quirks and abilities. This isn't on the program for the enhanced soldiers, which results in a lot of trouble.

Many of the tropes of near-future technological enhancement are on display here: mind-melding soldiers, nano-suits that protect the wearer from minor injury, instant access to information. Scalzi does a decent job of merging them into a plausible society: Dirac is as likely to use his tools and skills while joking around with his buddies as in battle.

The deeper issues include Dirac and the other soldiers' ability to make choices and control their own fate, the moral issues surrounding combatants and bystanders in war, and the morality of allowing population pressures to force the choice of going to war. Scalzi lets Dirac and his fellow soldiers explore the issues without forcing particular answers on them or us.

I liked the answers Dirac came up with better than those Ken Chinran came up with in Michael Williamson's The Weapon. Chinran was a nearly omnipotent military force on his own. He accepted his assignments without question, carried them out as best he could, and worried about ethics after the fighting was done. Chinran sometimes made morally doubtful tactical choices in the heat of battle that undermined his strategic objectives, and ended up regretting his choices several times. But he never learned to make better choices in battle.

Dirac considers the possibilities as he proceeds, and limits his tactical choices to behaviors he has already decided are morally acceptable acts of war. In one incident, Dirac and his squad are tasked with abducting the immature heir to the throne of one of humanity's enemies, the Eneshan. The squad recognizes that the morality is questionable. Some members, while willing to participate in the raid, ask to be left out of the dirty work, so the squad leader asks for volunteers. Dirac recognizes it as dirty, but accepts the necessity in a time of war. The important point for the story is that Dirac and his companions are making moral choices, even though they weren't given any choice about being soldiers.

Dirac continues to make moral choices right through the end.

Wednesday, November 01, 2006

ID Theft Report Discourages use of SSNs

The front page of the latest issue of Privacy Journal has the headline "Feds Now Discourage Use of SSNs". The article reports on the interim recommendations from an interagency task force on identity theft. Privacy Journal focuses on the report as the first indication that the federal government finally recognizes the danger that SSNs pose, and may start to take steps to remedy the harm they've caused by helping make SSNs ubiquitous in government and many private databases.

This report, by itself, doesn't change government policy, so you can't cite it as authority when trying to get assistance from government agencies without revealing your SSN. But you can refer to it when working to convince administrators that agency policy should change or that training of the people who collect information from the public should be updated. I suspect that it would also serve as useful ammunition when arguing with people in private industry. The report isn't directly addressed to them, and doesn't hold any legal authority, but it is a recommendation from the government, and it does represent a significant change of heart.

  • Recommendation 1: OMB should provide guidance to all federal agencies about giving notice in the event of data breaches.
  • Recommendation 2: OMB and DHS should identify best practices and mistakes to avoid.
  • Recommendation 3: on SSNs
    1. OPM should accelerate its review of the use of SSNs in its collection of human resource data, and take steps to reduce their use (including the assignment of employee identification numbers).
    2. The commentary suggests that agencies assign employee ID numbers to replace SSNs. They also suggest that Executive Order 9397 (which encouraged the use of SSNs in federal databases) might need to be "partially rescinded" in order to reduce the use of SSNs.
    3. OPM should develop and issue policy guidance to the federal HR community on the appropriate use of SSNs in employee records, including the proper way to restrict, conceal, or mask SSNs.
    4. OMB should require all federal agencies to review their use of SSNs to determine which uses can be reduced or eliminated.
  • Recommendation 4: All agencies should add disclosure of information in response to a data breach to their published "routine use" list under the Privacy Act.
  • Recommendation 5: The task force should investigate reliable methods of authenticating individuals to reduce openings for identity thieves.
  • Recommendation 6: Congress should add restitution for time spent remediating harm from identity theft to the criminal statutes.
  • Recommendation 7: The FTC will develop standardized forms for reporting identity theft to police.

Sunday, October 29, 2006

Vernor Vinge: Rainbows End

Vernor Vinge's Rainbows End is a near-future story, centered around the same San Diego high school as his Hugo-winning 2002 novella Fast Times at Fairmont High. This is a fun story to read, but not up to the standards of his galaxy-spanning stories. As a matter of fact, I thought Fast Times was more effective, though it's a truism that it's easier to write a punchy short story than an effective full-length novel.

In Rainbows End, we watch Robert Gu recover from Alzheimer's after getting a newly introduced therapy. As Gu regains his mental faculties, he realizes that he's lost some of his strengths, and that he'll have to develop new abilities in order to cope with the world as it has become. The bout of Alzheimer's provides a nice plot device, allowing Gu to skip forward in time, missing a period of technological development and having to catch up. In Vinge's future, society has caught on to the fact of Future Shock, and provides special classes to give people who have slept through the accelerating change a chance to catch up. It's not quite a singularity, but there are certainly many people who were left behind and have realized that coping with daily life requires more familiarity with modern technology (consensual reality, ubiquitous private instant messaging, sophisticated wearables, tools and transportation that can only be controlled from personal remotes that everyone is assumed to carry).

The large-scale conflict that drives the story concerns electronic security, and a biotech development that threatens people around the world. The plotters are a global cabal, with their fingers in every pie and the ability to subvert many of the underlying technologies. Gu's circle includes counter-terrorism specialists as well as loving family members who use the ubiquitous surveillance technology to follow his activities when there are hints that he's involved in events beyond his recovering abilities.

Libertarians will find much to ponder in the ubiquitous surveillance, the government's fundamental control of the foundations of technologies, and the shape of this not-too-unlikely future. But most of this is simply background for the story here, and Vinge doesn't bother to say or even imply much about the implications for freedom or personal responsibility. Some of the plotters seem to be able to pursue the plots because of the power they get from their role in government, but they are stopped by people who wield government power wisely and benignly.

This is a good book, but not a great one. I don't think it will be a strong contender for the Prometheus Award this year.

Thursday, October 26, 2006

Edward O. Wilson: On Human Nature

Edward O. Wilson's On Human Nature is an enjoyable read. (It won a Pulitzer Prize, so this is no surprise.) Even though it was written in 1978, it continues to provide a good overview of much that is still held to be true about human biology and sociology. There are chapters addressing Heredity, Aggression, Sex, Altruism, and Religion. Wilson is a gifted writer, and can explain subtle concepts with clarity. The opening chapter posits that we are evolved creatures, and that our brains are effectively machines constructed out of billions of nerve cells that bottom out in chemical and electrical interactions. With this as context, Wilson says that the central dilemmas of our existence are that we have no pre-established goals, and that our development of morality on a scientific basis has been short-circuited by the fact that much of our ethical instinct is inculcated by our heredity and environment. In order to understand how we can have goals beyond those evolution set for us, or understand morality in any depth, we first have to understand the biases that evolution has built into us.

The heart of the book is an exploration of what human nature consists of. Wilson provides clear contrasts with the many other creatures (from insects to higher mammals) that he has studied. He points out the myriad ways that social behavior differs across species, both to show how different thinking creatures might be, and to provide context for an argument that the "natural" drives we have evolved shouldn't be treated as guides to correct behavior. If religion can be systematically analyzed and explained as a product of the brain's evolution, its power as an external source of morality will be gone forever. He follows that with this statement of principles:

The core of scientific materialism is the evolutionary epic. Let me repeat its minimum claims: that the laws of the physical sciences are consistent with those of the biological and social sciences and can be linked in chains of causal explanation; that life and mind have a physical basis; that the world as we know it has evolved from earlier worlds obedient to the same laws; and that the visible universe today is everywhere subject to these materialist explanations.

The one thing I would fault Wilson for is for not addressing his first dilemma more strongly. I think he did a good job of showing that religion and our instincts are not a sufficient basis for establishing goals for us to pursue. But that doesn't leave us adrift in an uncaring cosmos; the first task of any maturing person is to discover or invent their own goals. Morality and ethics provide boundaries on that search, but everyone has to find their own destination. Wilson instead falls back on a shared goal of progress and scientific exploration. I do admire his eloquence. This is how he closes the book:

The true Promethean spirit of science means to liberate man by giving him knowledge and some measure of dominion over the physical environment. But at another level, and in a new age, it also constructs the mythology of scientific materialism, guided by the corrective devices of the scientific method, addressed with precise and deliberately affective appeal to the deepest needs of human nature, and kept strong by the blind hopes that the journey on which we are now embarked will be farther and better than the one just completed.

Wednesday, October 25, 2006

Epigenetics and Methylation

Science News (June 24, subscriber-only) had a nice feature article on epigenetics and the role of methylation. I thought their explanation of how the epigenetic marks attach to DNA was particularly clear. I've read about methylation before, but had never come across a description of the mechanism.

As early as the 1940s, researchers who couldn't explain some of an organism's attributes by straightforward Mendelian genetics started calling these aberrant traits epigenetic, says Randy Jirtle, a researcher who studies gene control at Duke University in Durham, N.C. "'Epigenetics' literally means 'above the genome,'" he explains.

Scientists eventually learned how apt the name was. Inspecting the double helix turned up hundreds of thousands of what scientists colloquially call "marks"—places where DNA is tagged with carbon and hydrogen bundles known as methyl groups. Enzymes attach methyl groups only at points on the genome where two DNA components—cytosine and guanine—meet. These components often cluster near the beginning of a gene, where proteins attach to turn on genes. If a methyl group blocks a protein from binding, the gene typically stays switched off.
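
The mechanical picture above is simple enough to sketch in a few lines of code. This toy Python function (my own illustration, not from the article) just locates the cytosine-guanine pairs where a methylating enzyme could attach a mark:

```python
def cpg_sites(dna):
    """Return positions of CpG dinucleotides -- the only spots on the
    genome where enzymes can attach a methyl group."""
    dna = dna.upper()
    return [i for i in range(len(dna) - 1) if dna[i:i + 2] == "CG"]

# A promoter-like stretch rich in CpG sites; if these positions get
# methylated, proteins can't bind and the gene stays switched off.
print(cpg_sites("ATCGCGTACGAT"))  # [2, 4, 8]
```

Real CpG-island finders weigh observed-versus-expected CG frequency over a sliding window, but the principle is the same: a methyl mark can only land where cytosine and guanine sit side by side.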

In recent years, scientists have learned that methylation isn't the only mark that changes whether genes are expressed. Various chemical groups clip on to histones, the spools around which DNA wraps when it condenses into chromosomes. These groups can affect how tightly DNA is packed. Although histone modification is not as well studied as methylation, researchers have shown that genes on loosely packed DNA are more likely to be expressed than are those on DNA that's tightly wound.

Most of these epigenetic marks are set by cells long before an animal's birth, says Jirtle. Each type of cell, from liver to skin to muscle, carries a distinct pattern of methylation and histone modification that, for the long term, switch genes on or off in the pattern necessary for the cell to do its job.

However, Jirtle adds, not all of these marks are set in stone. Outside factors during development can change which DNA segments are epigenetically modified, setting the stage for traits that linger into adulthood.

Jirtle's group did some studies with mice whose coat color can be changed epigenetically according to whether their diet is enriched or impoverished with methylation-inducing supplements. The supplements also affect disease susceptibility. They can prove that methylation mediates the signals because they can trace the presence of the methyl groups in subjects that receive different treatments, and they can change the signals by administering drugs that add or remove methyl groups selectively.

The interesting consequence of epigenetics via methylation is that parents and uterine environment have some control of the epigenetic markers of offspring separate from the DNA's message. Recent work is showing that the epigenetic markers can also be changed throughout an organism's lifespan, changing susceptibility to diseases and predisposing other somatic effects. None of this is surprising; I'm describing it merely to provide background for anyone who wouldn't otherwise understand the context. The benefit here is the clear explanation of how the methyl decoration works. Wikipedia has a more in-depth explanation and provides more discussion of consequences. I still like Science News' simple clarity.

Sunday, October 22, 2006

Space Elevator Challenge Results

The Second Annual X Prize Cup was held in Las Cruces, New Mexico this weekend. Four events related to space development were scheduled: the Space Elevator challenge in two parts (a climber competition and a tether competition); a lander challenge to develop a rocket-propelled autonomous vehicle that can take off, hover, then land again; and an exhibition for the Rocket Racing League. The engine for the racing rocket wasn't ready, so the craft was unveiled but not flown. I didn't find any reports on the Lunar Lander challenge; I assume no one won the competition. All the preliminary reports described mishaps of various sorts.

The most exciting competition was the climbing challenge. Competitors had to build a climber that could scale a specified ribbon at a speed of at least 1 meter per second and then descend. Power had to be beamed to the climbers from the ground. Winds were much higher than expected in Las Cruces, and as a result, the competition tether swung back and forth, moving many climbers in and out of their power beams. Two of the teams that tried to qualify used microwaves to beam power to their climbers; the airport announced on the day of competition that it wouldn't allow microwave-based systems. A Spanish team shipped its climber by UPS, but it was lost en route. Three other teams were scheduled to compete, but various mechanical problems and mishaps got in their way. That left six teams that managed to qualify:

Climber Qualifiers

  • USST (University of Saskatchewan Space Design Team) climbed in 58 seconds. They didn't manage to descend in the allotted time, and there was some question as to the actual height of the climb. If it was 50 meters, they didn't reach the speed required to win the challenge purse of $200K, while if it was 60 meters, they would have been fast enough.
  • LiteWon from Westmont High School in Campbell, California first climbed the tether in 5:31, and in a repeat attempt managed to improve that to 2:02. Their (controlled!) descent was very fast at about 10 seconds.
  • TurboCrawler from Germany had problems with their controller in the final climb, then managed the climb in 3:27. They used two spotlights totaling 30 kilowatts to power their climber.
  • Climber 1 from the University of Michigan ascended in 6:40; they were fast enough when the climber was illuminated, but the wind kept moving the climber out of the beam.
  • Kansas City Space Pirates climbed more than half way, but couldn't complete the climb due to the winds.
  • Snowstar from the University of British Columbia couldn't get a grip on the tether during the competition.
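
USST's borderline result comes down to simple arithmetic. A quick check (taking the qualifying speed as 1 meter per second, the threshold that makes a 58-second climb borderline between 50 and 60 meters):

```python
required = 1.0     # meters per second, assumed qualifying threshold
climb_time = 58.0  # seconds, USST's climb

for height in (50.0, 60.0):
    speed = height / climb_time
    verdict = "fast enough" if speed >= required else "too slow"
    print(f"{height:.0f} m in {climb_time:.0f} s -> {speed:.2f} m/s, {verdict}")
```

At 50 meters the climb averages about 0.86 m/s; at 60 meters, about 1.03 m/s, which is why the measured height mattered so much.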

Tether Qualifiers

Four teams entered the tether competition. They were required to submit a two-meter tether that couldn't weigh more than two grams. They would then compete in pairs to see which could hold a heavier weight. The four qualifiers were Astroaraneae, UBC, Centaurus Aerospace, and Bryan Laubscher. All the competitors met the weight limit, but only Astroaraneae met the two-meter minimum length, so the others were all disqualified.

In a series of friendly competitions, UBC outlasted Bryan's submission at 531 pounds; UBC broke at 880 pounds while Centaurus Aerospace survived that weight. Astroaraneae held until 1,335.9 pounds against the house entry. The house tether was allowed to weigh three grams and was the target to beat in order to win the $200K in prize money. The organizers then attempted to break the house tether (in order to provide a benchmark for next year's competition), but the pulling machine jammed.

The Space Elevator Blog had play-by-play on the competitions. The Space Elevator Reference also had coverage that's worth looking at, along with quite a few videos (mostly promos). YouTube has videos of some of the qualifying runs, though I didn't find any of the actual competition runs; it also has other videos related to the X Prize Cup events. personalspaceflight has more blogging, focusing on the rocketry side.


Saturday, October 21, 2006

Charles Stross: The Clan Corporate

Charles Stross's The Clan Corporate is the first part of a new story in the Merchant Princes series. (See my review of The Family Trade and The Hidden Family.) This segment is reasonably well told, but very inconclusive. Clan opens up some new issues, but doesn't resolve anything. In this installment, Miriam Beckstein finds herself unable to get much done because all the powerful people around her distrust her motivations. She manages to escape their clutches at one point, but digs herself a deeper hole, and ends up even more constrained in her actions. In many ways the subplot dealing with Mike Fleming, and his work helping to unravel the mystery from the government's point of view, was the most interesting part of the book. The government sets up a cross-agency task force, and manages to learn a bit about the abilities of the family members, and even make some initial incursions into their territory. There are connections to the modern anti-terrorist complex, but they aren't explored in any depth.

I didn't notice anything here that would make the book relevant to the Prometheus awards. The writing is good, but since it's obviously the first part of a longer work, and leaves everything up in the air, it's unlikely to attract attention for the writing. And there's no hint of libertarianism; even the explorations of the development of commerce, and the comparisons of the economic effects of different approaches that I liked so much in Hidden Family are missing since Miriam spends nearly all her time stuck in world 2. I recommend continuing to follow the series, but this book isn't much more than a bridge to the next part.

Joseph Mazur: Euclid in the Rainforest

I received a copy of Joseph Mazur's Euclid in the Rainforest as a going away gift when I left CommerceNet. Will figured out that I like reading books of puzzles, logic, and math.

It's a rambling approach to a lot of interesting byways in math and logic. The style switches between travelogue and reminiscence, always with in-depth side journeys into history of math and various logic puzzles.

The first section is presented as the story of the narrator's trip through the Amazon basin, running into various characters with interesting problems that require logical reasoning and an awareness of math or geometry to solve. How powerful a winch do you need to pull a two-ton truck up a 45° slope? In explaining the answer, we see the first of many proofs of the Pythagorean Theorem. At this point, the narrator also starts showing us how to disentangle syllogisms. His example asks what you can conclude from the following statements:

  1. Atoms that are not radioactive are always unexcitable.
  2. Heavy atoms have strong bonds.
  3. Uranium is tasteless.
  4. No radioactive atom is easy to swallow.
  5. No atom that is not strong is tasteless.
  6. All atoms are excitable except uranium.

He goes on to talk about Infinity and Probability in engaging ways. All in all, an enjoyable book. Not very deep; not much that was new, but the material was presented in a way that kept up my interest.
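
Sorites puzzles like the atoms example can also be untangled mechanically: encode each statement as an implication between signed literals, add the contrapositives, and forward-chain to a fixed point. The encoding below is my own reading of the six statements, not Mazur's notation:

```python
def closure(start, rules):
    """All literals derivable from `start` under `rules` and their
    contrapositives. Literals are (name, truth-value) pairs."""
    def neg(lit):
        return (lit[0], not lit[1])
    # every implication a -> b also yields not-b -> not-a
    full = rules + [(neg(b), neg(a)) for a, b in rules]
    known = {start}
    changed = True
    while changed:
        changed = False
        for a, b in full:
            if a in known and b not in known:
                known.add(b)
                changed = True
    return known

T, F = True, False
rules = [
    (("radioactive", F), ("excitable", F)),    # 1. not radioactive -> unexcitable
    (("heavy", T), ("strong", T)),             # 2. heavy -> strong bonds
    (("uranium", T), ("tasteless", T)),        # 3. uranium -> tasteless
    (("radioactive", T), ("swallowable", F)),  # 4. radioactive -> not easy to swallow
    (("strong", F), ("tasteless", F)),         # 5. not strong -> not tasteless
    (("uranium", F), ("excitable", T)),        # 6. all atoms but uranium are excitable
]

derived = closure(("swallowable", T), rules)
print(("uranium", T) in derived)  # True: an easy-to-swallow atom must be uranium
```

Chaining the contrapositives gives: easy to swallow, therefore not radioactive, therefore unexcitable, therefore uranium, therefore tasteless, therefore strong bonds.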


Saturday, October 07, 2006

James Surowiecki: The Wisdom of Crowds

I saw James Surowiecki give a talk at a prediction market conference before I read his book, The Wisdom of Crowds. The talk was interesting, and he demonstrated a strong grasp of the market and an appreciation for its value. He engaged the audience in a discussion on how prediction markets could be made to work better. I was quite impressed, but I assumed it was mostly unrelated to the book. I was wrong.

Surowiecki has done a commendable job of presenting broad coverage of the many ways in which crowds can work together to produce results that exceed those of individuals or poorly coordinated groups. He also explains the strengths and limitations of the effect: when groups can be expected to outperform individuals, and when they will underperform.

The book starts with some simple examples: taking the average of many people's uninformed guesses can work very well when everyone has a reasonable idea of the plausible range of values under consideration. Crowds do quite well at guessing the weight of a person or the number of jelly beans in a bowl. They don't do as well at guessing the distance to the moon.
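
The jelly-bean effect is easy to simulate: when each guesser is unbiased but noisy, the errors cancel and the crowd average beats the typical individual. A quick sketch (the numbers here are invented for illustration):

```python
import random

random.seed(1)
TRUE_COUNT = 850  # jelly beans actually in the bowl

# 500 unbiased guessers, each off by a couple hundred beans on average
guesses = [TRUE_COUNT + random.gauss(0, 200) for _ in range(500)]
crowd = sum(guesses) / len(guesses)

typical_error = sum(abs(g - TRUE_COUNT) for g in guesses) / len(guesses)
print(f"crowd average error: {abs(crowd - TRUE_COUNT):.0f}")
print(f"typical individual error: {typical_error:.0f}")
```

When guessers share a bias, as in the distance-to-the-moon case where nobody has a feel for the plausible range, averaging can't remove the shared error and the crowd does no better than its members.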

Surowiecki explores the different institutions we use to make decisions and predictions, looking for rules of thumb about what makes each work well and what problems each is suited for. His subjects range from markets and corporations to democracy, traffic, and science.

It's hard to find this summed up in one place in the book, but Surowiecki's thesis is that there are institutions that can do a good job of getting good decisions and predictions from people even when none of the participants could do as well, and that the important factors are that the institution is appropriate for the task, and the crowd is diverse and independent. When institutions encourage or allow people to follow one another, the crowd's independence is undermined, and the outcome can be swayed by strong individuals. If the participants all come from the same background, or share a point of view (because of the way they were selected, or because the institutional setting channeled their views) then you don't get robust results.

Surowiecki comes across as a strong proponent of markets in the book, but he also describes situations that are better suited for other tools. He lauds open source software development, and also points out that decentralized coordination without prices can work well for managing traffic or leading scientists to expand the frontiers of our knowledge. An enjoyable read with much to recommend it.

Sunday, October 01, 2006

Appreciation-indexed Mortgages

A recent article in the Economist talked about new loan products, currently available only overseas (Britain and Switzerland), that embody one of the ideas Robert Shiller talked about in The New Financial Order. Shiller pointed out that the financial industry has learned, over the last few decades, how to package up risks in many new ways, allowing companies and people who are subject to those risks to sell them to other parties in a way that can be beneficial to both. Insurance (health, employment, hazard, auto) was the first example; recent innovations like currency swaps help companies with too much exposure to currency fluctuations trade it away to others who benefit from it.

Shiller's proposals in The New Financial Order were that these innovations should be made more accessible to retail buyers to give us the ability to hedge away our excessive exposure to housing market fluctuations, trade cycles, and the risk that the industry we've accumulated a lifetime of experience in will go the way of the buggy whip. Specifically focusing on real estate, Shiller noted that while most home buyers benefit from the long-term increase in home values, a significant number of people are hurt when prices fall for short periods of time or in particular locations. Most of us would benefit by selling off a little of the upside exposure in exchange for some downside protection.

Shiller has focused some of his recent effort on finding ways to make these kinds of markets more likely and more accessible. The introduction of housing price indexes this year at CME and HedgeStreet, among other places, is possible because Shiller helped develop reliable indexes that major players in the real estate market agree are representative of actual price changes over time. The market in these securities isn't yet liquid enough for it to make sense for an average home owner to trade away their specific exposure. Besides, the markets are unfamiliar enough that few would find them.

The new products that the Economist described are designed to bridge this gap. The right time to talk to home buyers about their risks, and to get them to think about the trade-offs, is when they're buying or refinancing. The new products are a kind of mortgage that links the value of the loan to regional housing prices. If the market falls locally, the bank forgives a chunk of the loan; if the market rises, the bank shares the gain. This seems ideal for potential buyers who are worried about the real estate bubble. And it seems perfectly reasonable for banks and home owners to share the risk: home owners are overexposed, and banks underexposed, to residential real estate. The banks are positioned to take advantage of the long-term growth, and risk-averse home buyers can lay off some of their risk without giving up all chance of winning through appreciation. So far, these products are not available in the US. OTOH, according to the Economist, British home buyers aren't taking banks up on the offer yet.
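
The payoff mechanics can be sketched with hypothetical terms; the article doesn't spell out the actual sharing rules, so the 50% sharing fraction and the index values below are invented for illustration:

```python
def adjusted_balance(principal, index_at_origination, index_now,
                     sharing_fraction=0.5):
    """Loan balance after linking it to a regional housing index.

    sharing_fraction is the (hypothetical) share of the index move the
    bank absorbs: forgiveness when prices fall, an extra claim when
    prices rise.
    """
    index_change = (index_now - index_at_origination) / index_at_origination
    return principal * (1 + sharing_fraction * index_change)

# A 10% local decline on a $300,000 loan with 50% sharing leaves a
# balance of about $285,000; a 10% rise raises it to about $315,000.
balance_after_fall = adjusted_balance(300_000, 100.0, 90.0)
balance_after_rise = adjusted_balance(300_000, 100.0, 110.0)
```

The point of the structure is that the home owner's exposure to the local index is cut in half in both directions, which is exactly the trade described above.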


Tuesday, September 26, 2006

Tools I use: Nostalgy for Thunderbird

I haven't posted anything in this category for a while, but I recently found an extension for Thunderbird that has really made my day. I've noticed for a while that re-filing messages is too mouse-oriented for my tastes, and have hoped to find a solution.

You see, I'm a keyboard-oriented person. I use emacs as my preferred text editor, but even when I use Word I learn all the keyboard shortcuts, and customize the interface to add more shortcuts so I can do as much as possible from the keyboard. I know the keyboard shortcuts for most of the programs I use regularly, and have little trouble keeping them straight between different programs.

I often say "Bill (meaning Microsoft) thinks people feel productive when they're moving the mouse, and he wants you to feel productive, so he makes sure you have to move the mouse to get anything done." The Mac has unfortunately followed in this direction more and more recently, with many monologue boxes (my term for a confirmation window with only one choice). And too many of the dialogue boxes won't let you tab to a different choice: you have to move the mouse. I thought this went against usability guidelines, but the practice spreads nonetheless.

So anyway, back to Thunderbird. In order to refile messages in off-the-shelf Thunderbird, you can either navigate menus manually, or drag the message to a folder. If you have a large folder tree (as I do), this can be a serious hassle. I haven't figured out how to invoke the menu tree from the keyboard, since the menu is hierarchical, and AFAICT you can only add a shortcut for a leaf of the menu hierarchy. That means invoking the menubar (minimum 5 keystrokes or a mouse movement) or a context menu (mouse movement) then navigating the menus (mouse movement at each level, sub-menus might be on the right or the left; keyboard requires moving up and down with arrow keys, since individual menu items don't have live shortcuts).

Steve Putz's mail reader, Babar (only available in the version of Smalltalk that was used at PARC; the user community never exceeded 50) had a short menu of the most recently used folders that you could reach quickly for filing or visiting. The other alternative is a quick textual search from the keyboard. The Emacs front-end for mh (which I still use regularly to peruse my filtered junk mail) supports this.

Now there's a new extension for Thunderbird (first released in May) that supports filing from the keyboard. It's called Nostalgy, and it was written by Alain Frisch. Hit "s" (for save?) to refile or "c" to copy, then type a regular expression; the list of matching folders narrows as you type. And the most recently used folder is immediately accessible by capitalizing the command key, so filing to the same folder you just used (which is quite common) is just "S".
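
The narrowing behavior can be sketched roughly like this: treat the text typed so far as a regular expression and filter the folder list with it. The folder names are made up, and Nostalgy's actual matching rules may differ:

```python
import re

folders = ["Inbox", "Lists/zocalo-devel", "Lists/fx-discuss",
           "Receipts/2006", "Reviews/books"]

def narrow(pattern, candidates):
    """Return the candidates matching the (possibly partial) pattern."""
    try:
        rx = re.compile(pattern, re.IGNORECASE)
    except re.error:
        # An incomplete pattern (e.g. an unclosed "[") matches
        # everything until the user finishes typing it.
        return list(candidates)
    return [name for name in candidates if rx.search(name)]

narrow("rev", folders)      # ["Reviews/books"]
narrow("li.*fx", folders)   # ["Lists/fx-discuss"]
```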

It's wonderful!


Wednesday, September 20, 2006

Continuous Outcomes: Bands, Ladders, and Scaled Claims

The most common prediction markets we see are binary markets: those with two possible outcomes. Will Candidate X be reelected, which team will win a sports contest, will so-and-so be convicted, etc. The next most common kind determines the outcome from a list of possibilities known before the event: the winner of a multi-candidate election, the World Cup, or the Super Bowl. This post talks about another kind of market: predicting the value of a continuous variable, as in how much snowfall NYC will get this winter, how many cases of flu there will be in Iowa next month, the level of a company's sales, or the value of the Dow-Jones Industrial Average. Even "When will social security reform be passed?" and "When will the new product be shipped?" can be expressed in this format: the date of occurrence is the predicted value.

There are three common approaches to predicting continuous variables. I call them price bands, price ladders, and scaled claims. In the price ladder representation, a series of securities is offered, and each one is phrased as "the value will be lower than X". A series of securities with different values of X can cover all the possibilities (as long as the sequence is capped by a security that includes all other values). Price bands phrase each security's claim as "the value will be between X and Y". (These need to be capped on both ends by "W or less" and "Z or more".) Scaled claims represent the same kind of question with a single security that pays off a variable amount determined by the outcome.

Bands and Ladders are duals: any bet that can be made in one system can be made in the other with a structured bet. If you believe that the outcome will be between X and Y, and the market offers only ladders, then you buy the "lower than Y" security and sell the "lower than X" security. If the market offers bands and you want to bet that the price will be above (or below) some value, you buy all the bands at or above (or below) that value. Scaled claims don't offer these options.
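
The duality can be made concrete with made-up thresholds and prices. A ladder security "value will be lower than X" should trade near P(value < X), so differencing adjacent rungs recovers the implied probability of each band:

```python
ladder = {1100: 0.05, 1200: 0.20, 1300: 0.55, 1400: 0.85, 1500: 0.95}

# Cap the ladder with an "any value" rung: P(value < infinity) = 1.
thresholds = sorted(ladder) + [float("inf")]
prices = [ladder[x] for x in sorted(ladder)] + [1.0]

def band_probabilities(thresholds, prices):
    """Implied probability that the value falls below each threshold
    but at or above the previous one."""
    bands = {}
    prev = 0.0
    for x, p in zip(thresholds, prices):
        bands[x] = p - prev
        prev = p
    return bands

bands = band_probabilities(thresholds, prices)
# bands[1300] is the implied probability of the outcome landing in
# [1200, 1300): 0.55 - 0.20 = 0.35.

# Betting that the value lands in [1200, 1400) using only ladders:
# buy the "< 1400" rung and sell the "< 1200" rung.
cost = ladder[1400] - ladder[1200]   # about 0.65 per share
```

Since the capped ladder covers all outcomes, the band probabilities telescope to exactly 1.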

The question to answer in designing these markets is which approach is more convenient for the trader and more amenable to analysis. (If the output of these markets is predictions, then we want usable predictions.) In order to offer markets in continuous outcomes, the market operator has to decide what a plausible range of outcomes would be, and which possibilities are reasonable choices. That helps determine how many bands to offer and how wide they should be. If the consensus view expects a value between 1200 and 1500, someone offering bands or ladders wants to set things up so there are choices ranging from 1100 to 1600, with some choices within the consensus forecast. With variable payouts, if the value is going to change very much, the outcomes of interest should take up as much of the probability space as possible. On FX (where the design of the securities is a subject of public discussion) there is often a discussion about what outcomes are most likely, in order to choose a wide active range that the price will move around in. If the market operator can't tell where the contentious issue will lie, they are forced to choose between many securities, most of which will be uninteresting, or a few wide bands, with the risk that there's little disagreement on the outcome (to the resolution the claim provides) for much of the claim's lifetime.

It isn't necessary that the bands be of equal sizes. When the magnitude of the outcome is highly variable, a log scale can be used. On FX, log scales are often considered for death tolls for epidemics or other disasters, though I couldn't find any claims that ended up using one. It can also make sense to have narrower bands in the region of the most likely outcomes, and wider bands further away. Robin Hanson suggested another approach: split up the bands as the consensus changes. In order to be fair to the traders, the securities should cover all the possibilities (i.e. no gaps or excluded ranges; most easily done by having the lowest range be "< X" and the highest be "> Y"). If one band attracts a high percentage of the interest, split it into sub-ranges, and give each investor who owns shares in the range being split a corresponding number of each of the new shares. Reverse splits can't be done cleanly, so it's important not to split too soon or too often if the UI doesn't handle lots of claims well.
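
The splitting step can be sketched as a bookkeeping operation, as I read Hanson's suggestion: every holder of the old band receives the same number of shares of each new sub-band, so no one's maximum payout changes. The account names and band labels are invented:

```python
def split_band(holdings, band, sub_bands):
    """Replace `band` with `sub_bands` in every account, granting the
    same number of shares of each sub-band as were held in the
    original band."""
    new_holdings = {}
    for account, positions in holdings.items():
        positions = dict(positions)
        shares = positions.pop(band, 0)
        if shares:
            for sub in sub_bands:
                positions[sub] = positions.get(sub, 0) + shares
        new_holdings[account] = positions
    return new_holdings

holdings = {"alice": {"1200-1400": 10, "1400-1600": 3}}
holdings = split_band(holdings, "1200-1400", ["1200-1300", "1300-1400"])
# alice now holds 10 shares of each sub-band; since at most one band
# can pay off, her maximum payout is unchanged.
```

The asymmetry the paragraph mentions is visible here: this transformation is easy, but merging sub-bands back would require taking shares away from traders who hold unequal amounts of the two halves.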

Another weakness of scaled claims is that the market only produces one price, so you can't tell when investors' opinions are bimodal: for instance, when many people think there will be either a huge epidemic or barely any effect, with medium epidemics being unlikely.

The market will be more liquid if people can express their view without having to buy multiple securities to cover their expectations. So to some extent the best choice depends on whether people are more likely to think they know the most likely value, or more likely to think "the value will be at least X" (or "at most X"). I think people usually find it easier to decide on a maximum or minimum value than a most-likely range. On the other hand, price bands make it easier to understand the market's forecast. The calculations of implied prices based on the prices of puts and calls in the stock market are complicated, and deriving a prediction from prices of laddered securities should be just as involved.
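
Under the simplifying assumption that each band's price reflects its probability, the band form's forecast really is easy to read off: a probability-weighted midpoint. The bands and prices here are hypothetical:

```python
bands = {(1100, 1200): 0.15, (1200, 1300): 0.35,
         (1300, 1400): 0.30, (1400, 1500): 0.20}

# Treat each band's price as the probability of that band and take
# the probability-weighted midpoint as the point forecast.
expected = sum((lo + hi) / 2 * p for (lo, hi), p in bands.items())
# roughly 1305: the probability-weighted center of the distribution
```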

If the securities are built as price bands, the approach to improving liquidity in N-way markets that I described in a previous article would be applicable. The same trick doesn't apply to laddered securities, since those assets can't be expressed as linear combinations of one another.

TradeSports had a market in snowfall in New York City last winter, and many of their financial bets are for continuous variables. They also offered their bets on capture of Saddam Hussein and Osama bin Laden as well as the date of passage of Social Security reform as a series of deadline dates, which have the same semantics as ladders.

FX has scaled claims, and this form is also used by IEM for judging the popular vote (as opposed to the winner-take-all market based on the electoral college outcome). IEM's local flu markets (currently inactive?) were done as price bands (different colors represented different observed rates of flu).

HedgeStreet is currently using overlapping bands in some of their markets (50-60, 55-65, 60-70). This gives more choices to the customer, but that means it splits the liquidity. Doubling the number of outcomes investors have to pay attention to cuts liquidity approximately in half. I can't think of an advantage of this choice. HedgeStreet also has price bands that aren't capped, so it's possible that none of the bands will include the outcome.

Inkling's and CrowdIQ's markets ask you to pick a single winner from a list. These can be used for either price bands or ladders, and some markets there are being done that way.

NewsFutures isn't currently offering any contracts on continuous variables that I could find; I don't remember any in the past either. HSX's stocks are open-ended continuous payout securities. Yahoo! Tech Buzz has discrete outcomes with payouts proportional to the search measure.

Previous articles in this series:

Tuesday, September 19, 2006

Kim Stanley Robinson: Forty Signs of Rain

Kim Stanley Robinson's Forty Signs of Rain is a lightweight eco-thriller. The principal characters are climate scientists working in Washington DC and San Diego while the climate gradually worsens. The time-scales are short enough that it isn't plausible to treat the increasing storms as more than anecdotes, but they're big enough (the mall in Washington DC floods because of a combination of high tides and storms feeding both watersheds that drain into the Potomac) to be striking to most readers. It's the kind of striking image that leads people to reason from fictional evidence.

But the story is engaging, and the characters are plausible examples of the stereotypical focused scientist. Many loose ends are left hanging for the sequels. The only science fiction seems to be the speed of onset of global warming. There are several scenes involving start-up biotech firms and the politics of recruiting. There's also a significant amount of time spent in Washington, both in giving grants to scientists and in lobbying Congress to do something about the state of the world. It all rings reasonably true, and will be familiar to anyone who has worked in technology, academia, or government.

One of the characters is an ex-rock climber. I didn't find anything discordant in any of the climbing scenes. I've even been involved in rappelling into a building atrium (not solo, though) and found that description plausible as well.

Fun, but not deep. The whole purpose seemed to be to show us a powerful vignette of the effects of global warming, and I'll admit that the image is powerful.

Sunday, September 17, 2006

Jaffe & Lerner: Innovation and Its Discontents

Innovation and Its Discontents by Adam Jaffe and Josh Lerner is a level-headed appraisal of the problems with the current patent system in the US, accompanied by some common sense proposals to address them. In discussions of the patent system, I have said for a while that the core problem is that the patent office is issuing bad patents, and argued that people should be careful not to conclude from the effects of these bad patents that patents are a bad idea.

Jaffe & Lerner seem to agree with this point of view. They trace the problems to a poor incentive structure in the patent office, compounded by procedural rules in the patent courts that presume that patents are being issued competently. The budget of the patent office has been cut, and the incentives on individual patent examiners push toward easy approval; there is no cost to issuing too many patents, and a high cost to spending extra time reviewing. Once patents get to the patent court, the rules are stacked in favor of the patent holder by a presumption that the patent review was performed competently. All this combines to produce a glut of lousy patents that are hard to attack.

Patents that are easy to earn and hard to dislodge can be a serious drain on innovation, because inventors and innovators necessarily re-use previous ideas in developing new solutions. When only major innovations gain the right to exclude rivals, the incentives ought to encourage inventors to describe their breakthroughs so others can exploit them after the exclusion period, and minor advances don't turn into roadblocks.

One somewhat surprising point that Jaffe and Lerner make quite clearly is that the patent system doesn't have to screen out worthless patents perfectly. As long as it's possible to overturn undeserved patents, allowing too many patents wouldn't be a problem, since most patents don't lead to commercial products. If it's not too expensive to defend against a claim of infringement based on a weak patent, then the system doesn't have to prevent all bad patents. If we can't remove the strong presumption the current system makes in favor of patent holders, the only resolutions would be to look for reforms that prevent bad patents from being issued, or to advocate the complete repeal of patents.

Their proposals are much more conservative than this, and if they are adopted relatively intact, they seem to have a fair chance of improving the situation greatly.

Their proposal has three parts: 1) make it possible for people to provide relevant prior art before a patent is granted without precluding the prior art from being used later in court (the current system assumes that prior art that the patent office knew about before granting a patent was correctly considered by the examiner, and so it is excluded from later use in court challenges.) 2) provide escalating levels of review and challenge so that weak patents can be challenged cheaply, and important patents get an appropriate level of review without too strong a presumption about the outcome, and 3) drop the option of jury trials, and move to a system of judges and expert special masters. (Juries seldom understand the issues in a patent case, and the presumptions they are instructed to make lead them to decide for the patent holder whenever they are confused.)

Another argument that the authors make persuasively is that it would be a mistake to advocate different rules for patents in different fields. First, this would have the effect of pushing applicants to couch their patents in whatever terms give them the most advantage. The authors show that this has gone on with respect to the different treatment that business method, pharmaceutical, and software patents get currently. Secondly, if the rules vary across fields, every lobby will have an incentive to make a case that they are special in some way. We'll all be better off if we can get simple across-the-board reforms implemented that limit patents to real breakthroughs, and make it possible for innovators to proceed without being obstructed by patents on minutiae and well-known techniques.

I have no idea what the chances are that these reforms might be considered in the current political environment. It appears that lawyers, as a coalition, like the current system, but it's not clear why any innovative company would prefer it, even if they have a large patent portfolio at present.

Tuesday, September 12, 2006

Zocalo 2006.5 released: Installer for Windows

I've released a new version of the Zocalo Prediction Market Toolkit. Zocalo now has an installer for Windows. This should make it simple to install the prediction market version on Windows platforms. If you've already installed, there's nothing else worth upgrading for, but if you were waiting for a simpler install on Windows, this is it.

The 2006.3 release was the first to support deployment on Windows, but it was packaged as a .zip file with manual instructions. I got a couple of requests for a Windows-style installer, so I figured out how to build one using the open source NSIS package. It took me a week and a half to get everything right, but I think the result is worth it. It's a professional looking installer that prompts for some configuration parameters, and verifies their values before inserting them in a configuration file.

I didn't make an installer for the experiment version. If you've been holding off installing that because of a perception that installing on Windows from a zip file was going to be too hard, let me know. I'd guess that building another installer will take a few days now that I've figured out all the quirks of NSIS that mattered for Zocalo.

Saturday, September 09, 2006

Michael Lockwood: The Labyrinth of Time

Michael Lockwood's The Labyrinth of Time starts out as an explanation of the physical structure of time, but morphs gradually into a treatise on anything Lockwood is interested in that can be remotely connected to time. Lockwood's interests include time travel, quantum electrodynamics, black holes, and quantum gravity. His exposition is better in areas that are clearly physics, and worse on things like the meaning of causality and time travel.

Lockwood starts by presenting two basic ways of thinking about time. In one, the past is fixed and the future is mere possibility turning into the unchanging past as we move past it. The other says that both the past and future are fixed, our point of view is called the present, and as time passes, we get to watch a narrow slice of the fixed 4-dimensional reality pass by. Most of physics works equally well whichever direction time moves, and Lockwood wants to explain why the past and the future seem different to us. We remember the past and not the future. In the end, this paradox wasn't cleared up for me.

Lockwood does present some good new visualizations that clarify space-time and various "problems" with reasoning about the speed of light. His presentation of simultaneity, using a moving train car with two different observers was remarkably clear, as was his explanation of the twin paradox using two unaccelerated paths through a hypothetical cylindrically connected space-time. The presentations of the physics of black holes (and how to use a black hole as a power source) were also good, but I didn't get much out of his approaches to string theory or quantum gravity. I think part of it was that the presentations were more hurried nearer the end of the book; he did better when he took the time to explain the basic concepts clearly first before getting to the advanced ideas. I suspect that except for people steeped in these later areas, his explanations will come up short.

Lockwood kept coming back to time travel paradoxes, without ultimately resolving the issue. Apparently the standard interpretations of physics say that paradoxes aren't allowed, and the open question is what mechanism the universe will use to prevent them. Apparently, if you don't believe in many worlds, you are forced to believe that there is only one past and a single future that will eventually unfold. This doesn't allow causal loops (even though the fundamental equations seem to countenance them), much less time travellers acting in the past to prevent their known futures from (re-)occurring. I think the underlying theories force you to accept a single past even if you don't go for many worlds. So, whether the past becomes fixed as we journey with the moving present, or our viewpoint moves across an already fixed landscape, you're stuck.

I don't see how these arguments would convince anyone that the universe would act to prevent closed time-like loops. Until we see events conspiring to prevent causal loops, I'm happier reconciling the claim that physics allows time travel by accepting that space-time may be a causal spiral than expecting the universe to allow time travel while conspiring subtly to prevent someone from killing her grandfather. But as Norm Hardy pointed out, some of the physicists who seem to understand the equations say that the equations force you to believe in an invariant past. And in the end, Einstein's ability to predict, based on the equations, much of what we now believe and experiments have since confirmed makes that a strong argument.

Peter McCluskey also wrote about Labyrinth.


Thursday, August 31, 2006

Dynamically Transferrable Proxies

There has been a lot of theoretical work on voting, starting with Kenneth Arrow's proof that it's impossible for any consistent vote counting procedure to satisfy a short list of desirable criteria he provided. In the years since, people have proposed a variety of counting procedures that make different trade-offs among those criteria. The LFS uses a variant of preferential voting that allows members to rank candidates, because nominees must get more votes than NOTA to win. This approach means we don't need run-off votes if there isn't a single winner in the first round of voting.

Voting isn't a great way to solve problems, but it may be the best we have in situations in which people want to work together and choices must be made about how to proceed. This is often the case when people want to cooperate (volunteer groups, owner's associations, or standards organizations). There are some actions that work best when performed in concert, so people often agree to follow a group's decisions. (With some limits on what the group's purview is, or the right to withdraw if the decisions are unacceptable.)

Wikipedia has several articles on different vote counting procedures: Borda count, Condorcet (and variants), and several others. Each system makes a different trade-off: some ensure that the winner is hated by the fewest electors, others that the winner is preferred by more electors over each opponent, or preferred more strongly by the most voters, and so on.
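
To make one of these procedures concrete, here's a minimal Borda count sketch: each ballot ranks all the candidates, and a candidate in position i on a ballot of n candidates scores n - 1 - i points. The ballots are invented for illustration:

```python
def borda(ballots):
    """Score ranked ballots: first place on an n-candidate ballot
    earns n - 1 points, last place earns 0."""
    n = len(ballots[0])
    scores = {}
    for ballot in ballots:
        for position, candidate in enumerate(ballot):
            scores[candidate] = scores.get(candidate, 0) + (n - 1 - position)
    return scores

ballots = [["A", "B", "C"], ["B", "C", "A"], ["B", "A", "C"]]
scores = borda(ballots)   # {"A": 3, "B": 5, "C": 1}
```

Note the trade-off: B wins here by being broadly acceptable, even though a different counting rule could weigh first-place support differently.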

The other aspect of electoral systems is that for many purposes, we prefer to elect representatives who will spend more time discussing issues in detail and either make a proposal for all to consider or decide among the alternatives themselves. Most voters, most of the time, aren't willing to pay much attention. (Economists have shown why it is irrational for voters to become informed when the electorate is large, and why narrow interests control the issues that pertain to them.) In organizations at all scales, most of the time, most people have better things to do with their time and attention.

There are as many ways to select representative panels as to count votes. Representatives in the US Congress are chosen by a majority in a single district, and Senators are chosen the same way, even though there are two of them. Many European countries' parliaments are instead selected using Proportional Representation (PR): votes are counted over larger districts, and winners are selected in proportion to the votes cast. The problem PR addresses is that if 51% of the people in each district vote Orange, and 49% vote Purple, then no Purple legislators are elected. In elections for corporate Boards of Directors, any voter can usually cast all their votes for a single candidate, ensuring that someone who owns 20% of the shares can select a Director (one of five seats) even if everyone else votes for the same slate of five. (If these were congressional seats, each representative of the larger group would win with 80% of the vote.) Notice that PR also defuses the pressure for gerrymandering.

The choice of vote counting mechanism and how multiple representatives are chosen has a large effect on the make-up of the elected body. The American approach naturally limits the power of third parties. Since the only people who get elected have more than 50% of the votes, parties that can't at least occasionally surpass that mark won't ever hold office. That naturally leads to a two party system. If any party has significantly less than half the electorate as its base, then it can increase its chances of being elected by joining with another group. If a faction isn't within the margin of power of its party, the party can ignore their demands without hurting its chance of being elected, so the faction can increase its influence by threatening to join a different party that will pay more attention to their voters' wishes. Sometimes, a faction can't credibly threaten to bolt the party (because of historical interactions with the other party, for instance) and so their influence will wane.

In PR systems, there is often a floor: parties with less than 5% of the vote (for example) don't get a seat. Any faction larger than that can get a seat, and be part of the decision-making body. As long as the voting rules in the representative body allow their voice to be effective, that's enough to keep the factions alive, and ensure that they continue to distinguish their positions from their rivals.

All of this has an effect on how interest groups organize and persist. Most of the time, single-issue groups can't get elected to general legislatures. Most voters have too many distinct interests to allow one interest to represent them. Even with the lower floor in PR systems, groups representing e.g. labor have to take a position on most of the major issues, and hope to attract voters who match their position on more issues than their rivals. In the American system, single-issue groups find they have to be allied with one major party or the other (think of the NRA, the NAACP, either side of the abortion question, labor, etc.).

Since I'm a libertarian, I'm always a minority and my vote is of no interest to the major parties. For a long time, the Libertarian Party had a stated goal of being the margin of victory between the major party candidates in as many local elections as possible. The theory was that at that point, the weaker party would have a reason to try to co-opt some platform planks in order to gain converts. Every time they do that, that party would be moving closer to libertarian positions, which is useful given that libertarians aren't going to be elected in numbers anytime soon.

Years ago, I had an idea for a different way of organizing representative bodies that would fundamentally change the dynamic. It ought to energize all the minor factions, and could make it possible for representative bodies to reach decisions that matched the voters' views much more often by ensuring that elected representatives don't have to be a compromise between positions on a large number of issues. There is, of course, an argument to be made that satisfying more voters more often will lead to the government constantly robbing different minorities in order to pay off constantly changing majorities. I think the right response to that is to try the mechanism out in voluntary organizations rather than governmental bodies.

I call my proposal "dynamically transferable proxies". The idea is that representatives aren't elected, they merely carry proxies from voters who currently back them. A representative's vote in the forum is the sum of the individuals they currently represent. Voters can choose new representatives as often as they want. A proxy-holder can transitively assign all the proxies they currently hold to any other representative. Voters and assigners can revoke their assignments anytime they like as well.
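
The core mechanism can be sketched as a small tallying function: each voter names a proxy, proxy-holders can pass their accumulated weight on transitively, and revocation is just reassignment. The names and delegation chains below are invented:

```python
def vote_weights(delegations, voters):
    """Tally each voter's single vote at the end of their proxy chain."""
    weights = {}
    for voter in voters:
        current = voter
        seen = {current}
        # Follow transitive assignments; the seen-set guards against
        # accidental delegation cycles.
        while current in delegations and delegations[current] not in seen:
            current = delegations[current]
            seen.add(current)
        weights[current] = weights.get(current, 0) + 1
    return weights

# Hypothetical assignments: a single-issue group passes the proxies
# it holds on to a coalition; dee keeps her own proxy.
delegations = {"ann": "nra_rep", "bob": "nra_rep",
               "cal": "moderate", "nra_rep": "coalition"}
weights = vote_weights(delegations, ["ann", "bob", "cal", "dee"])
# {"coalition": 2, "moderate": 1, "dee": 1}
```

Because the tally is recomputed from the current delegation table, a voter switching proxies before a vote takes effect immediately, which is the "dynamic" part of the proposal.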

In order to have productive negotiations about what alternatives to consider, those currently representing the largest constituencies should have a place (virtual, if necessary) to talk. I think it makes sense to choose a number (20, 100, 435) and enable that number of the largest proxy-holders to convene. If they agree on an agenda, and set particular times to discuss specific issues, the voters can dynamically switch their proxies to people or groups that best represent them on that topic. If gun control is on the agenda, then the NRA will have some number of proxies from their dedicated members, and Handgun Control, Inc. will have proxies from their constituents. More importantly, everyone else, somewhere in the middle, will assign their proxy to someone who can articulate a reasonable compromise, or someone who can plausibly claim to be willing to listen to arguments. Major parties can attempt to form coalitions, but voters can defect, one issue at a time.

In this context, some people would aim to be representatives, but many more would find a niche as proxy-holders who would notify constituents when an upcoming agenda item affects them, and commit to assign the proxy to someone who would represent their views effectively. (That's the right role for the NRA and the right-to-life/choice groups. They don't need to be present on the floor 99% of the time, but when contentious issues are being discussed, it would be better to separate representatives of extremists from representatives of moderates.)

I think this approach could have a moderating effect on politics. But I don't think there's any point in trying to promote it as a replacement for currently existing governments. OTOH, connectivity has increased enough that it might make sense for some groups. Some of the criteria for a good fit include: highly-connected constituency, broad range of issues to make decisions about, and a need to make collective decisions. I think it would work as well for a voluntary group as for a political subdivision.

I have been an officer of a couple of volunteer organizations, but the range of issues hasn't been wide enough for this kind of process to be worthwhile. I've occasionally fantasized that this would be the right way to run a large colony once we start exploring space, or a starship. What organizations would it make sense for? Does Second Life have a representative governing body?

Thanks to Dan Reeves (currently at Yahoo) for encouraging me to write this idea up.

Wednesday, August 30, 2006

Insider Trading

Stephen Roman responds to a post from Professor Bainbridge, which was a response to Wolfers and Zitzewitz' question about Manipulation in their 5 Open Questions About Prediction Markets paper. That paper didn't bring up insider trading; the manipulation section focused on attempts to change outcomes by changing prices. Bainbridge's approach is to ask how bad insider trading could be since firms don't seem to have prohibited it on their own.

Roman tries to address the question of whether there is ever a reason for a prediction market exchange to ban insider trading. His answer is that it's never in the interest of the exchange or any of the participants to refuse admission to anyone. This leaves out an important part of the argument, though. The reason people keep asking the question is that they thought they learned from stock markets that it was important to prevent it. If stock markets are right to ban insiders from trading, what's the difference, and what does that imply for prediction markets?

The reason insiders are banned from trading on stock markets isn't because of their effect on prices or information, it's because having something to gain from the company's losses might give them reason to act against the company's interests. (In the insurance field, this is called "moral hazard".) The reason the major stock markets publicize and help enforce the ban is because the companies are their customers, and the exchanges and the companies share an interest in making the market appear to be free of that influence.

Another rationale for the ban is to prevent insiders from profiting from early access to information, but I think that is a secondary concern. If the moral hazard created by allowing people to trade against their employer's interests weren't present, it would be much harder to figure out where to draw the line to say who has unfair early access to information.

In the absence of such rules from the exchange, it would still make sense for companies to make and enforce explicit rules prohibiting employees from taking positions against the interests of the company when they have privileged access to information. Such a rule would clearly apply to some specific prediction market contracts, but companies would probably find it more effective to state a general rule than to try to maintain an explicit list of forbidden contracts or topics.

In the end, I agree with Stephen that prediction markets shouldn't ban insider trading. But it's not because all insider trading is benign, it's because insider trading is a net benefit in these markets, and the conflicts of interest can be handled closer to where they occur. And the answer to Bainbridge's question is that companies haven't done anything voluntarily because the stock exchanges already prohibit it. I think you have to look somewhere other than publicly traded companies to find examples of voluntary enforcement.

Tuesday, August 29, 2006

2006 Prometheus Awards

The Libertarian Futurist Society announced this year's winners of the Prometheus Awards on Friday at the 64th World Science Fiction Convention.

Ken MacLeod won this year's Prometheus award for Best Novel for Learning the World.

Alan Moore and David Lloyd won the Award for Best Classic Fiction (the Hall of Fame award) for their graphic novel "V for Vendetta". David Lloyd accepted the award.

Joss Whedon's film, Serenity, won a Special Prometheus Award. The award's dedication read "To 'Serenity', writer-director Joss Whedon's fun-loving and pro-freedom movie that portrays resistance fighters struggling against oppressive collectivism (based on the unfortunately short-lived TV series Firefly)." Morena Baccarin, who plays Inara in the series, accepted the award.

Wally Conger had the details on who accepted the awards; Anders Monsen linked to Wally's post.

Saturday, August 26, 2006

John Varley: Red Lightning

John Varley's Red Lightning is a pretty lightweight example of the genre of revolution on the Martian colony. Actually, the revolution is almost an afterthought, and the rest is a simple juvenile adventure. Our hero is Ray Garcia-Strickland, a teenager growing up on Mars, the son of two of the pioneers, and close confidant of Jubal Broussard, the eccentric inventor who came up with the device that makes space travel affordable. The first half of the story is an adventure through tsunami-torn Florida after an unidentified, near-light-speed object tears through the Gulf of Mexico. In the second half, Mars is repeatedly invaded and subdued by unidentified mercenaries who are hoping to capture Jubal and gain control of his inventions. In the end, Ray, Jubal, and friends hold off the mercenaries and declare independence for Mars based on their control of Jubal's inventions, which no one else has been able to replicate.

I found the story to be weak, contrived, and derivative. The technology is magic, and the fact that it's all-powerful (provides cheap, plentiful energy, and it can make Vingean bobbles, too!) and requires Jubal's personal touch to manufacture renders it an unbeatable tool. The bad guys are one-dimensional: faceless, incompetent in low-gee combat, ruthless torturers, and ordered around by unseen forces.

Red Lightning has already been nominated for next year's Prometheus award. Varley's a good writer: his The Golden Globe won a Prometheus award, and the story "The Persistence of Vision" won Hugo and Nebula Awards. The story has enough anti-authoritarian elements to qualify as (weakly) libertarian: the government does a lousy job managing after the tsunami (Varley swears in an afterword that that part was conceived before the December 2004 Indian Ocean tsunami and the Katrina devastation), and the mercenaries are either backed or tolerated by governments on Earth. The story is a fun read; there's just no depth to it. The two halves of the book are unrelated stories; the trip to Florida gives us a chance to see the devastation, but we see mere snippets of heroism and villainy; the same is true of Mars under occupation. The declaration of Martian independence at the end is managed without the awareness or help of the residents of Mars, even though they have plenty of grievance after their occupation by the anonymous mercenaries.


Friday, August 25, 2006

Elizabeth Moon: Engaging the Enemy

Elizabeth Moon's Engaging the Enemy has been recommended for the Prometheus award, as a follow-up to her previous Marque and Reprisal. I think Marque had a little more to recommend it as libertarian, but both are fine space adventure yarns.

The heroine is Kylara Vatta, a young woman and offspring of an established interstellar trading clan. Ky dropped out of space academy after a scandal, and was just trying to get started on a career as a merchant when most of her clan was killed in a coordinated series of raids by parties unknown. In the previous book, Ky received a letter of marque from one planetary government, authorizing her to attack and capture pirate ships. This stretched the bounds of what she'd been brought up to believe was honorable behavior, but these were extraordinary circumstances. Happening upon an outcast from the Vatta clan who appeared to be operating as a pirate, Ky captured his ship and killed him. Now, in Engaging the Enemy, she has an armed ship to use to try to chase down those responsible for her family's deaths, and start rebuilding the trading empire.

The rebuilding is mostly assigned to her cousin, but organizing the independent privateers into a fighting force that can take on the apparently organized pirates and free the shipping lanes for peaceful commerce is left to Ky. She's young enough that once she's gathered a group of independent captains together, someone else asserts seniority and takes command. But we can tell from the beginning that Ky will make a better commander, and she wisely decides to work on her cooperation skills while waiting to see if her battlefield experience and tactical smarts are evident to the others. It doesn't take long for her qualifications to appear.

The subplots and side-stories provide background about the family and build up some characters who may turn out to have important roles later, but the heart of this book is Ky's political maneuvering and the actual space battles. There's a touch of right to self-defense, but it's layered with revenge motive, so it's hard to call that a libertarian thread. The battles and intrigue are well-written, but there's not much of deep substance here. It's a solid novel, with interesting characters and fast-paced action. There's no deeper meaning or overarching significance to events, so if you're satisfied with simple interstellar adventure stories, you'll find this is a good read.


Monday, August 21, 2006

George R. R. Martin: A Storm of Swords

George R. R. Martin's A Storm of Swords is the third book in his series A Song of Ice and Fire. I've been enjoying this series for the character development, the span of events, and the complexity of the action. This installment is almost 1200 pages long, but that wasn't a detriment. Martin is telling a story that fills the pages and continues to hold my interest.

The series covers seven interlocking medieval kingdoms that have recently lost the one that brought relative peace by conquering and uniting them all. Every house is jockeying for power, to hold the realm together, to hold onto a kingdom, or to hold a stable enough seat to start over. Each house's character is driven by the position they hold and the relationships they've relied on. The power brokers in each house are usually the ones who embody the family character, leaving lesser characters to have widely varying motives and approaches.

In the previous books, we have met quite a few characters, but events seem to turn around the house of Stark: the widow of Lord Eddard Stark and their children (now ranging in age from Rickon, who is 4, to Robb, who is old enough to rule). This time around, the Starks are the focus of the Storm in the title. Several Starks fall, but even after they seem to have died or gotten lost, they continue to play a role.

Early in the book, Martin's round-robin telling of events from a different point of view in each chapter had me mentally reviewing the status of each of the Starks (and their major opponents) each time I put the book down. Even though it felt like there were nearly a dozen characters I wanted to track, I didn't have any trouble remembering where each one was, and what was about to happen to them. These characters struggle, make mistakes, sometimes succeed and often fail. There are a few characters who are purely ignoble, and others who are treated as rats by others but who we see, behind the scenes, striving to do what's right for their kingdoms or their fellows.

The biggest bloodbath in the book comes as a complete surprise, even though Martin ensured that we and the characters had enough indications that something was coming that I should have guessed, and they should have been prepared.

If you don't mind long, convoluted plots when they're well told, I strongly recommend the series. There's a bit of fantasy (magic and dragons), but Martin keeps it under control. There's little that's libertarian here, but there's plenty to admire and despise about people's characters. They don't always get their just deserts quickly, but usually they do eventually. I've found the series to be an engrossing read.


Zocalo Release 2006.4a available

A new release of the Zocalo Prediction Market Toolkit is available. The main change in this release is an overhaul of the way database transactions are managed. Transactions are now begun when the server starts processing an HTTP request and committed when the request finishes. This fixes a bug reported to me by a user, which I entered in the SourceForge bug list. The release adds a configuration parameter (in zocalo.conf) to control the size of traders' initial stakes, in response to a request from a user. Also in response to a suggestion, I converted all the shell scripts from csh to the Bourne shell (sh), since sh is available on more platforms. And there's now a common top-level directory at the base of the tar and zip files in the distribution, so the extracted files are grouped better.
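The per-request transaction scoping is easy to sketch. The code below is not Zocalo's actual implementation (the class names and the mock Tx object are invented for illustration), but it shows the shape of the change: begin a transaction as a request arrives, commit when the handler completes normally, and roll back if it throws, so a failed request can't leave a half-finished transaction open.

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for a real database transaction; an actual server would
// use its persistence layer's transaction API here.
class Tx {
    static final List<String> log = new ArrayList<>();
    void begin()    { log.add("begin"); }
    void commit()   { log.add("commit"); }
    void rollback() { log.add("rollback"); }
}

public class RequestTxDemo {
    // Wrap one HTTP request handler in a transaction: begin before the
    // handler runs, commit if it succeeds, roll back if it throws.
    static void handleRequest(Runnable handler) {
        Tx tx = new Tx();
        tx.begin();
        try {
            handler.run();
            tx.commit();
        } catch (RuntimeException e) {
            tx.rollback();
        }
    }

    public static void main(String[] args) {
        handleRequest(() -> { /* e.g. record a trade */ });
        handleRequest(() -> { throw new RuntimeException("bad input"); });
        System.out.println(String.join(",", Tx.log));
        // prints "begin,commit,begin,rollback"
    }
}
```

Because the wrapper sits at the dispatch layer, individual page handlers never need to manage transactions themselves, which is what makes this kind of fix possible in one place.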

The transaction management change fixes a bug I have seen occasionally for a while, but that only showed up consistently in the last month in a way that allowed me to track it down. I think there are now no outstanding bugs, though there are still many missing features. If there's a feature that you'd really like added soon, let me know, and I'll add it to my list. Many requests are simple enough that I can get to them immediately. There are also features that will take a while to build, and I have to prioritize those somehow; your suggestions will increase the likelihood of my getting to them sooner.

I uploaded 2006.4 on Saturday, announced it on SourceForge, and mentioned it to one other user who had asked when it would be ready. Before I had a chance to blog the announcement, I received a note about a minor problem. I had changed the installation process to use the default HTTP port (80) for the server, but realized while doing a test installation that that port isn't accessible for user-level installs on many platforms, so I changed back to the old port number. In changing back, I fumble-fingered the fix while producing 2006.4, so the current release is 2006.4a. I recommend switching to it as soon as you reasonably can.

Sunday, August 20, 2006

San Francisco Mime Troupe: Godfellas

Yesterday we went to see the current show by the San Francisco Mime Troupe. Normally, I keep quiet about my appreciation of the Mime Troupe; they're fundamentally Marxists as far as I can tell. But, as Janet says, they're further from the mainstream than I am, so they can be relied on to skewer whoever is in power, and they have a great sense of humor. And the old trope seems true to me: the left is much better at producing entertainment than the right. I should point out that this is a musical comedy, performed outdoors on their portable stage. The Mime Troupe does political theater. "Mime" is used in its ancient sense of parody.

But all that aside, their current show, Godfellas, is wonderfully libertarian. The heroine's constant inspiration is Thomas Paine, who is quoted liberally, and even appears on stage a couple of times. The theme of the show is religious control of society and people's thoughts. The show's protagonist is a teacher who runs an after-school program that is taken over by religious extremists (in a move that would only make sense to people who don't understand ownership, or who believe that bureaucracies are always all-powerful). She decides to fight back and starts a movement that ends up going head-to-head with the religious syndicates that are pushing for a constitutional amendment to institute a national day of mandatory prayer.

The rhetoric is clear, pro-reason, and probably offensive to any even mildly religious audience. The Palo Alto crowd (primarily left-wing) seemed to love it.

The Mime Troupe has another month left on their summer run. They'll be performing the show in San Francisco, Berkeley, Sacramento, Santa Cruz, and several other locations throughout central and northern California through October 1st. If the show sounds like it might be appropriate in your area, they're trying to put together an East Coast tour next year; contact the Mime Troupe for details.

Tuesday, August 15, 2006

Claim Definition in Prediction Markets

Claim definition in Prediction Markets is hard to get right, but good claim descriptions make judging easier. An intrinsic part of the challenge is that the best claims are close calls until near the ending date. That means you want to choose a deadline and terms on which people disagree about the likely outcomes at the outset. You also want claims that will resolve themselves, and be easy to judge at the end; it's a benefit to everyone if the answer is clear when the claim closes. This is much easier to achieve with practice in any particular area--routine claim formats can be polished over time. So the easiest claims to write are for highly repetitive events like sporting events and elections. I expect the same will become true of predicting project completions and corporate quarterly performance targets, but currently those issues are being described, argued about, and judged in private, so there's no shared repository of experience to draw on.

The fact that payouts are limited to the amount spent to purchase claims is integral to the institution of prediction markets. If market operators ever pay off both sides of a claim, that is likely to encourage investors to protest many more close calls. Refunding all investments would have a much less deleterious effect than paying off all parties. In disputed calls in which evidence may not be available immediately after an event occurs, it's an advantage to the market to be able to roll back betting that occurred after the event was retroactively determined to have occurred. (TradeSports has a clause in their rules (Contract Rules 1.8) that allows them to unwind trades that occurred later than one hour before the first press report of the outcome. The rule applies only to claims that state this explicitly beforehand. In the NK missile case, since no confirmation was ever received, there was no time to roll back to. If they had decided that the terms had been satisfied, they would have been able to roll back the post-launch betting.)

As long as claims are designed so the market operator doesn't take a position on the claim, the operator won't have any conflict of interest in deciding the claim. There are other good reasons for choosing an outside judge. The market operator may be pressured to change a judgment or to pay both sides in a disputed claim; if they are also the judge, this pressure is stronger. If the judgment is from an external party, it will at least be seen as independent. If all judgments are issued with the authority of the exchange, losing bettors will blame the exchange itself. I think identifying the judges and showing their track records is a service to the investors. There may not be many businesses willing to trust judging to outside parties, however. I fear they will feel the heat on close calls, and there will necessarily be close calls.

It's important to distinguish sources from judges. The judge makes the decision for the exchange about the outcome. In my view, claims should include a reference to preferred sources. For some subjects it's reasonable to rely on newspapers; for others, peer-reviewed journals are more appropriate. Often a government agency has the responsibility for, and a history of, reporting on a particular class of outcomes. If no source is specified, there will be disputes (at least when money is at stake) based on obscure media reports. It's better to make it clear what level of assurance is required. Unfortunately, sources sometimes depart from their historical practices, so it's usually valuable to say how the outcome will be judged in the absence of an official statement. When sources are named, it's crucial to the exchange's reliability that they not be changed without prior announcement. (I don't fault TEN (TradeSports) for continuing to rely on the DoD in the recent contretemps over the North Korean missiles. They said the DoD would be their source, and some traders were relying on that statement. It would have been improper for TEN to act without a public statement from the DoD or a direct statement to TEN; a statement to a subscriber isn't something TEN should rely on for a judgment.)

Chris Masse has suggested that the intent and the wording of a claim should always coincide. But this isn't possible in practice, as is well known to anyone who has managed or participated in the process from start to finish. Claims that seem clear when written (remember that it's a goal to make the outcome hard to determine at the outset) often turn out to be unclear later. Over time, the meanings of terms mutate, and their application to specific questions is modified as events transpire.

Since we strive for close calls, the timing is also often sensitive. Some events aren't publicized immediately, so claims should specify whether the date of the event or the date of publication controls the outcome. (Often in FX, both are specified for science claims.)

We often look for externally judged claims; if there's a prize being awarded, that often makes a good claim. But the claims need to be careful to specify whether the underlying achievement controls or the awarding of the prize determines the outcome. What happens if the judging doesn't occur (the judging organization goes out of business or stops overseeing that type of event) or if the judge awards a consolation prize? (See the recent SENS challenge at Technology Review.) Sometimes the disappearance of the judge or the dropping of a contest is material; often the participants won't agree on how "the spirit of the claim" applies to these edge cases.

In general, the judge should pay attention first to the text of the claim, and second (in cases where this is accessible) to the discussion of the claim before the wording was finalized. On FX there is often plenty of context on what a claim means that doesn't make it into the claim itself. The conversation between sponsor, judge, and community can be very informative, including about what wording was intentionally left out of the claim. It's unfortunate that most of the high-profile markets have a closed conversation (if any) on claim wording, and then produce short claims with no discussion of edge cases.

If there's no visible discussion of the intent of the claim, the only thing investors have to go on is the plain text. Asking them to guess how the judge or exchange will interpret the intent is much less certain than assuring them that the wording will control. Everyone has a good argument for why their interpretation of the intent is the one that makes sense. There's no reason to let this be the basis for complaints and arguments.

TEN hasn't addressed this issue, but it seems clear to me that they consider their positive claims on current events to be the controlling language. The negative position is a shorthand description saying that the event didn't take place. When they cite a source and require confirmation, an absence of confirmation clearly requires a "no" outcome. NewsFutures ensures that the two sides of a claim are duals; neither has precedence over the other in the way the interface presents them, but this gives them a reason to make the wording consistent. The arguments that TEN needed to confirm that the missiles weren't fired or didn't land in international waters don't seem reasonable to me. I'm reasonably sure some of the proponents haven't noticed this distinction between TEN's positive and negative claims. Maybe they'll add "or no confirmation is obtained" to their standard list of negative cases.