Tuesday, September 26, 2006

Tools I use: Nostalgy for Thunderbird

I haven't posted anything in this category for a while, but I recently found an extension for Thunderbird that has really made my day. I've noticed for a while that re-filing messages is too mouse-oriented for my tastes, and have hoped to find a solution.

You see, I'm a keyboard-oriented person. I use emacs as my preferred text editor, but even when I use Word I learn all the keyboard shortcuts, and customize the interface to add more shortcuts so I can do as much as possible from the keyboard. I know the keyboard shortcuts for most of the programs I use regularly, and have little trouble keeping them straight between different programs.

I often say "Bill (meaning Microsoft) thinks people feel productive when they're moving the mouse, and he wants you to feel productive, so he makes sure you have to move the mouse to get anything done." The Mac has unfortunately followed in this direction more and more recently, with many monologue boxes (my term for a confirmation window with only one choice). And too many of the dialogue boxes won't let you tab to a different choice: you have to move the mouse. I thought this went against usability guidelines, but the practice keeps spreading nonetheless.

So anyway, back to Thunderbird. In order to refile messages in off-the-shelf Thunderbird, you can either navigate menus manually, or drag the message to a folder. If you have a large folder tree (as I do), this can be a serious hassle. I haven't figured out how to invoke the menu tree from the keyboard, since the menu is hierarchical, and AFAICT you can only add a shortcut for a leaf of the menu hierarchy. That means invoking the menubar (minimum 5 keystrokes or a mouse movement) or a context menu (mouse movement) then navigating the menus (mouse movement at each level, sub-menus might be on the right or the left; keyboard requires moving up and down with arrow keys, since individual menu items don't have live shortcuts).

Steve Putz's mail reader, Babar (only available in the version of Smalltalk that was used at PARC; the user community never exceeded 50) had a short menu of the most recently used folders that you could reach quickly for filing or visiting. The other alternative is a quick textual search from the keyboard. The Emacs front-end for mh (which I still use regularly to peruse my filtered junk mail) supports this.

Now there's a new (first release was in May) extension for Thunderbird that supports filing from the keyboard. It's called Nostalgy, and it was written by Alain Frisch. Hit a key: "s" (for save?) to refile, "c" to copy, and you can then type a regular expression while the applicable list is narrowed down as you type. And the most recently used folder is immediately accessible by capitalizing the command key, so filing to the same folder you just used (which is quite common) is just "S".

It's wonderful!


Wednesday, September 20, 2006

Continuous Outcomes: Bands, Ladders, and Scaled Claims

The most common prediction markets we see are binary markets: those with two possible outcomes. Will Candidate X be reelected, which team will win a sports contest, will so-and-so be convicted, and so on. The next most common is determining the outcome from a list of possibilities known before the event: winner of a multi-candidate election, the World Cup, or the Super Bowl. This post talks about another kind of market: predicting the value of a continuous variable, as in how much snowfall will NYC get this winter, how many cases of flu will there be in Iowa next month, the level of a company's sales, or the value of the Dow-Jones Industrial Average. Even "When will social security reform be passed?" and "When will the new product be shipped?" can be expressed in this format: the date of occurrence is the predicted value.

There are three common approaches to predicting continuous variables. I call them price bands, price ladders, and scaled claims. In the price ladder representation, a series of securities is offered, and each one is phrased as "the value will be lower than X". A series of securities with different values of X can cover all the possibilities (as long as the sequence is capped by a security that includes all other values). Price bands phrase each security's claim as "the value will be between X and Y". (These need to be capped on both ends by "W or less" and "Z or more".) Scaled claims represent the same kind of question with a single security that pays off a variable amount determined by the outcome.

Bands and ladders are duals: any bet that can be made in one system can be made in the other with a structured trade. If you believe that the outcome will be between X and Y, and the market offers only ladders, then you buy the "lower than Y" security and sell the "lower than X" security. If the market offers bands and you want to bet that the value will be above (or below) some threshold, you buy all the bands on that side of the threshold. Scaled claims don't offer these options.
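The duality can be checked mechanically. Here's a minimal sketch (the payout functions are hypothetical illustrations, not any real exchange's API) showing that buying the "lower than Y" ladder and selling the "lower than X" ladder replicates the band covering [X, Y):

```python
def band_payout(outcome, lo, hi):
    """Payout of a band security covering [lo, hi): $1 if the outcome lands in the band."""
    return 1.0 if lo <= outcome < hi else 0.0

def ladder_payout(outcome, cap):
    """Payout of a ladder security: $1 if the value ends up lower than cap."""
    return 1.0 if outcome < cap else 0.0

def band_via_ladders(outcome, lo, hi):
    """Replicate the band [lo, hi) by buying ladder(hi) and selling ladder(lo)."""
    return ladder_payout(outcome, hi) - ladder_payout(outcome, lo)

# The structured ladder position pays identically to the band for any outcome:
for outcome in (1100, 1250, 1499, 1500, 1700):
    assert band_payout(outcome, 1200, 1500) == band_via_ladders(outcome, 1200, 1500)
```

The same arithmetic run in reverse (summing band payouts up to a cap) reconstructs a ladder, which is why the two representations carry the same information.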

The question to answer in designing these markets is which approach is more convenient for the trader, and more amenable to analysis. (If the output of these markets is predictions, then we want usable predictions.) In order to offer markets in continuous outcomes, the market operator has to decide what a plausible range of outcomes would be, and which possibilities are reasonable choices. That helps determine how many bands there should be and how wide. If the consensus view expects a value between 1200 and 1500, someone offering bands or ladders wants to set things up so there are choices ranging from 1100 to 1600, with some choices within the consensus forecast. With variable payouts, if the value is going to change very much, the outcomes of interest should take up as much of the probability space as possible. On FX (where design of the securities is a subject of public discussion) there is often a discussion about what outcomes are most likely, in order to choose a wide active range that the price will move around in. If the market operator can't tell where the contentious issue will lie, they are forced to choose between many securities, most of which will attract no interest, or a few wide bands, risking that there's little disagreement on the outcome (to the resolution the claim provides) for much of the claim's lifetime.

It isn't necessary that the bands be of equal sizes. When the magnitude of the outcome is highly variable, a log scale can be used. On FX, log scales are often considered for death tolls for epidemics or other disasters, though I couldn't find any claims that ended up using it. It can also make sense to have narrower bands in the region of the most likely outcomes, and wider bands further away. Robin Hanson suggested another approach: split up the bands as the consensus changes. In order to be fair to the traders, the securities should cover all the possibilities (i.e. no gaps or excluded ranges; most easily done by having the lowest range be "< X" and the highest be "> Y".) If one band has a high percentage of the interest, split it into sub-ranges, and give each investor who owns shares in the range being split a corresponding number of each of the new shares. Reverse splits can't be done cleanly, so it's important to not split too soon or too often if the UI doesn't handle lots of claims well.
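The split Robin Hanson suggests can be sketched as a small bookkeeping operation. (The dict-based holdings model below is my own assumption for illustration, not Zocalo's or any real market's data structure.)

```python
def split_band(holdings, old_band, new_bands):
    """Replace shares in old_band with an equal number of shares in each sub-band.

    This is fair to traders: a holder of N shares of (1200, 1500) receives N
    shares of each of (1200, 1300), (1300, 1400), (1400, 1500), so their total
    payout is unchanged whatever the outcome turns out to be.
    """
    shares = holdings.pop(old_band, 0)
    for band in new_bands:
        holdings[band] = holdings.get(band, 0) + shares
    return holdings

# One trader's holdings before the split: 10 shares of the crowded band.
h = {(1200, 1500): 10, (1500, 1800): 3}
split_band(h, (1200, 1500), [(1200, 1300), (1300, 1400), (1400, 1500)])
# h now holds 10 shares in each sub-band, and the (1500, 1800) position is untouched.
```

Reversing this (merging sub-bands) would require confiscating or netting unequal positions, which is why reverse splits can't be done cleanly.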

Another weakness of scaled claims is that the market only produces one price, so you can't tell when investors' opinions are bimodal. The single price can't reveal, for instance, that many people think there will either be a huge epidemic or a minor one, with medium epidemics being unlikely.

The market will be more liquid if people can express their view without having to buy multiple securities to cover their expectations. So to some extent the best choice depends on whether people are more likely to think they know the most likely value, or more likely to think "the value will be at least X" (or "at most X"). I think people usually find it easier to decide on a maximum or minimum value than a most-likely range. On the other hand, price bands make it easier to understand the market's forecast. The calculations of implied prices based on the prices of puts and calls in the stock market are complicated, and deriving a prediction from prices of laddered securities should be just as involved.
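As an illustration of why band prices are easy to read off directly, here is a sketch of deriving a point forecast from them. (The numbers are invented; I'm assuming prices approximate probabilities, and using band midpoints as a rough stand-in for each band's conditional mean.)

```python
def expected_value(band_prices):
    """Point forecast from band prices.

    band_prices: list of ((lo, hi), price) pairs covering the whole range.
    Treats each price as the probability of its band and each band's midpoint
    as its representative value; normalizes in case prices don't sum to 1.
    """
    total = sum(price for _, price in band_prices)
    return sum(((lo + hi) / 2) * price
               for (lo, hi), price in band_prices) / total

bands = [((1100, 1200), 0.10), ((1200, 1300), 0.25),
         ((1300, 1400), 0.40), ((1400, 1500), 0.20),
         ((1500, 1600), 0.05)]
print(expected_value(bands))  # 1335.0
```

The band prices also give you the whole distribution for free (so bimodality is visible), whereas backing a distribution out of ladder prices requires differencing adjacent securities first.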

If the securities are built as price bands, the approach to improving liquidity in N-way markets that I described in a previous article would be applicable. The same trick doesn't apply to laddered securities, since those assets can't be expressed as linear combinations of one another.

TradeSports had a market in snowfall in New York City last winter, and many of their financial bets are for continuous variables. They also offered their bets on capture of Saddam Hussein and Osama bin Laden as well as the date of passage of Social Security reform as a series of deadline dates, which have the same semantics as ladders.

FX has scaled claims, and this form is also used by IEM for judging the popular vote (as opposed to the winner-take-all market based on the electoral college outcome). IEM's local flu markets (currently inactive?) were done as price bands (different colors represented different observed rates of flu).

HedgeStreet is currently using overlapping bands in some of their markets (50-60, 55-65, 60-70). This gives more choices to the customer, but that means it splits the liquidity. Doubling the number of outcomes investors have to pay attention to cuts liquidity approximately in half. I can't think of an advantage of this choice. HedgeStreet also has price bands that aren't capped, so it's possible that none of the bands will include the outcome.

Inkling's and CrowdIQ's markets ask you to pick a single winner from a list. These can be used for either price bands or ladders, and some markets there are being done that way.

NewsFutures isn't currently offering any contracts on continuous variables that I could find. I don't remember any in the past either. HSX's stocks are open-ended continuous-payout securities. Yahoo! Tech Buzz has discrete outcomes with payouts proportional to the search measure.

Previous articles in this series:

Tuesday, September 19, 2006

Kim Stanley Robinson: Forty Signs of Rain

Kim Stanley Robinson's Forty Signs of Rain is a lightweight eco-thriller. The principal characters are climate scientists working in Washington DC and San Diego while the climate gradually worsens. The time-scales are short enough that it isn't plausible to treat the increasing storms as more than anecdotes, but they're big enough (the mall in Washington DC floods because of a combination of high tides and storms feeding both watersheds that drain into the Potomac) to be striking to most readers. The kind of striking image that leads people to reasoning from fictional evidence.

But the story is engaging, and the characters are plausible examples of the stereotypical focused scientist. Many loose ends are left hanging for the sequels. The only science fiction seems to be the speed of onset of global warming. There are several scenes involving start-up biotech firms, and the politics of recruiting. There's also a significant amount of time spent in Washington, both in giving grants to scientists and in lobbying Congress to do something about the state of the world. It all rings reasonably true, and will be familiar to anyone who has worked in technology, academia, or government.

One of the characters is an ex-rock climber. I didn't find anything discordant in any of the climbing scenes. I've even been involved in rappelling into a building atrium (not solo, though) and found that description plausible as well.

Fun, but not deep. The whole purpose seemed to be to show us a powerful vignette of the effects of global warming, and I'll admit that the image is powerful.

Sunday, September 17, 2006

Jaffe & Lerner: Innovation and Its Discontents

Innovation and Its Discontents by Adam Jaffe and Josh Lerner is a level-headed appraisal of the problems with the current patent system in the US, accompanied by some common sense proposals to address them. In discussions of problems with the patent system, I have said for a while that the problem is that the patent office is issuing bad patents, and argued that people should be careful not to conclude from the effects of these bad patents that patents are a bad idea.

Jaffe & Lerner seem to agree with this point of view. They trace the problems to a poor incentive structure in the patent office, compounded by procedural rules in the patent courts that presume patents are being issued competently. The budget of the patent office has been cut, and the incentives on individual patent examiners push toward easy approval; there is no cost to issuing too many patents, and a high cost to spending extra time reviewing. Once patents get to the patent court, the rules are stacked in favor of the patent holder by a presumption that the patent review was performed competently. All this combines to produce a glut of lousy patents that are hard to attack.

Patents that are easy to earn and hard to dislodge can be a serious drain on innovation, because inventors and innovators necessarily re-use previous ideas in developing new solutions. When only major innovations gain the right to exclude rivals, the incentives ought to encourage inventors to describe their breakthroughs so others can exploit them after the exclusion period, and minor advances don't turn into roadblocks.

One somewhat surprising point that Jaffe and Lerner make quite clearly is that the patent system doesn't have to screen out worthless patents perfectly. As long as it's possible to overturn undeserved patents, allowing too many patents wouldn't be a problem, since most patents don't lead to commercial products. If it's not too expensive to defend against a claim of infringement based on a weak patent, then the system doesn't have to prevent all bad patents. If we couldn't remove the strong presumption the current system makes in favor of patent holders, the only resolutions would be to look for reforms that prevent bad patents from being issued, or to advocate the complete repeal of patents.

Their proposals are much more conservative than this, and if they are adopted relatively intact, they seem to have a fair chance of improving the situation greatly.

Their proposal has three parts: 1) make it possible for people to provide relevant prior art before a patent is granted without precluding the prior art from being used later in court (the current system assumes that prior art that the patent office knew about before granting a patent was correctly considered by the examiner, and so it is excluded from later use in court challenges.) 2) provide escalating levels of review and challenge so that weak patents can be challenged cheaply, and important patents get an appropriate level of review without too strong a presumption about the outcome, and 3) drop the option of jury trials, and move to a system of judges and expert special masters. (Juries seldom understand the issues in a patent case, and the presumptions they are instructed to make lead them to decide for the patent holder whenever they are confused.)

Another argument that the authors make persuasively is that it would be a mistake to advocate different rules for patents in different fields. First, this would have the effect of pushing patenters to couch their patents in whatever terms give them the most advantage. The authors show that this has gone on with respect to the different treatment that business method, pharmaceutical, and software patents get currently. Secondly, if the rules are variable across fields, every lobby will have an incentive to make a case that they are special in some way. We'll all be better off if we can get simple across-the-board reforms implemented that limit patents to real breakthroughs, and make it possible for innovators to proceed without being obstructed by patents on minutiae and well-known techniques.

I have no idea what the chances are that these reforms might be considered in the current political environment. It appears that lawyers, as a coalition, like the current system, but it's not clear why any innovative company would prefer it, even if they have a large patent portfolio at present.

Tuesday, September 12, 2006

Zocalo 2006.5 released: Installer for Windows

I've released a new version of the Zocalo Prediction Market Toolkit. Zocalo now has an installer for Windows. This should make it simple to install the prediction market version on Windows platforms. If you've already installed, there's nothing else worth upgrading for, but if you were waiting for a simpler install on Windows, this is it.

The 2006.3 release was the first to support deployment on Windows, but it was packaged as a .zip file with manual instructions. I got a couple of requests for a Windows-style installer, so I figured out how to build one using the open source NSIS package. It took me a week and a half to get everything right, but I think the result is worth it. It's a professional-looking installer that prompts for some configuration parameters, and verifies their values before inserting them in a configuration file.

I didn't make an installer for the experiment version. If you've been holding off installing that because of a perception that installing on Windows from a zip file was going to be too hard, let me know. I'd guess that building another installer will take a few days now that I've figured out all the quirks of NSIS that mattered for Zocalo.

Saturday, September 09, 2006

Michael Lockwood: The Labyrinth of Time

Michael Lockwood's The Labyrinth of Time starts out as an explanation of the physical structure of time, but morphs gradually into a treatise on anything Lockwood is interested in that can be remotely connected to time. Lockwood's interests include time travel, quantum electrodynamics, black holes, and quantum gravity. His exposition is better in areas that are clearly physics, and worse on things like the meaning of causality and time travel.

Lockwood starts by presenting two basic ways of thinking about time. In one, the past is fixed and the future is mere possibility turning into the unchanging past as we move past it. The other says that both the past and future are fixed, our point of view is called the present, and as time passes, we get to watch a narrow slice of the fixed 4-dimensional reality pass by. Most of physics works equally well whichever direction time moves, and Lockwood wants to explain why the past and the future seem different to us. We remember the past and not the future. In the end, this paradox wasn't cleared up for me.

Lockwood does present some good new visualizations that clarify space-time and various "problems" with reasoning about the speed of light. His presentation of simultaneity, using a moving train car with two different observers was remarkably clear, as was his explanation of the twin paradox using two unaccelerated paths through a hypothetical cylindrically connected space-time. The presentations of the physics of black holes (and how to use a black hole as a power source) were also good, but I didn't get much out of his approaches to string theory or quantum gravity. I think part of it was that the presentations were more hurried nearer the end of the book; he did better when he took the time to explain the basic concepts clearly first before getting to the advanced ideas. I suspect that except for people steeped in these later areas, his explanations will come up short.

Lockwood kept coming back to time travel paradoxes, without ultimately resolving the issue. Apparently the standard interpretations of physics say that paradoxes aren't allowed, and the open question is what mechanism the universe will use to prevent them. Apparently, if you don't believe in many worlds, you are forced to believe that there is only one past and a single future that will eventually unfold. This doesn't allow causal loops (even though the fundamental equations seem to countenance them), much less time travellers acting in the past to prevent their known futures from (re-)occurring. I think the underlying theories force you to accept a single past even if you don't go for many worlds. So, whether the past becomes fixed as we journey with the moving present, or our viewpoint moves across an already fixed landscape, you're stuck.

I don't see how these arguments would convince anyone that the universe would act to prevent closed time-like loops. Until we see events conspiring to prevent causal loops, I'm happier reconciling the claim that physics allows time travel by accepting that space-time may be a causal spiral than expecting the universe to allow time travel while conspiring subtly to prevent someone from killing her grandfather. But as Norm Hardy pointed out, some of the physicists who seem to understand the equations say that the equations force you to believe in an invariant past. And in the end, Einstein's ability to predict, based on the equations, a lot of what we now believe and that experiments have confirmed, says that that's a strong argument.

Peter McCluskey also wrote about Labyrinth.
