Showing posts with label Epistemology. Show all posts

Wednesday, February 13, 2013

Thinking Fast and Slow: Daniel Kahneman

Daniel Kahneman's Thinking Fast and Slow was much better than I expected. Not that I doubted it would be well written or interesting; rather, since Kahneman's results and views have been covered in great depth in a lot of other works I've read, I didn't expect much to be new. But even if you're fairly familiar with his results and ideas, the book presents them well, and gives good advice on how to take advantage of your brain's predilections and work around its shortcomings.


Kahneman is well known as the progenitor (with Amos Tversky) of the Heuristics and Biases literature. You will find references to their research and results in lots of popular presentations on how people think, and the various ways in which people are prone to mistaken beliefs and sub-optimal actions. In Thinking Fast and Slow, he presents a unified discussion of this work, along with some solid suggestions for integrating the conclusions into your approach to life so that you can get more of what you want and be happier.


The basic theory is that we have two main approaches to problem solving, with divergent benefits. The fast-thinking part ("System One") is ready to make snap judgments on any subject at any time. It is fast, but it takes lots of shortcuts, and doesn't even bother to choose an optimal shortcut. Whatever answer first presents itself to this part of our minds gets latched onto, because the evolutionary benefit was in having some answer quickly in case our ancestors needed to react immediately. The other approach ("System Two") is slow and deliberate, and involves evaluating many alternatives and consciously weighing the benefits and the appropriateness of each for the current problem. The problem with System Two is that it's expensive, and for good evolutionary reasons your instincts always offer a quick-and-dirty response before there's time to consider more carefully.


Kahneman spends the bulk of the book giving examples of particular, named classes of mistakes we make (the "availability heuristic", the "illusion of validity", the "endowment effect", etc.). It's probably useful to be aware of these classes if you want to reason more clearly, but I see the main value of Kahneman's approach as making us aware that our snap judgments are suspect. There are good reasons for each bias, which explains why evolution selected for that particular outcome, but whenever you're not in a life-and-death race to escape a lion, it pays to be attentive to your innate biases and consider your options more carefully. Having names for a catalog of short-sighted trade-offs you are likely to have gravitated toward makes it easier to see which first guesses to re-think.


The final section of the book follows another perspective, also first identified by Kahneman and Tversky: the idea that our "Experiencing Self" and our "Remembering Self" evaluate the things we do differently, which can lead to strange trade-offs when choosing what to do. The author argues that our memories systematically underweight pain we experience and consistently get some things wrong about enjoyable times, leading us to guess incorrectly about what kinds of situations we'd prefer in the future.


Experimental evidence shows that people's memories of painful episodes (dentist visits, for example) are dominated by the final moments, neglecting how painful the earlier parts were. This means that adding 5 minutes of slightly painful procedure to the end of a very painful 15-minute procedure actually makes people remember the whole incident as less painful. Many people argue that it's clearly wrong to choose 20 minutes of pain over 15, but this is not obvious to me. The 15-minute session should also carry the burden of all the subsequent time when the patient remembered the more painful portions more vividly. The 20-minute session may have included more pain while in the chair, but the experiments show that the patients were less upset long afterward, partly because their memories were less gripping. So, as I see it, it's less of a contradiction than Kahneman believes.


On the other side, our recollections of enjoyable situations are also skewed. We tend to neglect long periods of time spent in pleasurable avocations (Kahneman calls this "duration neglect"), and when asked to choose how to spend our time or money, we often opt for the choice with a more easily recalled high point, regardless of the duration or enjoyability of the entire experience. Kahneman recommends that when planning vacations, or choosing other ways to spend our time, we focus more on the ongoing experience than on the extremes. He's pretty convinced we'll get more out of life that way. The counter is that when recalling our lives we'll be subject to just these biases, and regardless of how much joy there was in the small moments, we'll focus on the highs and lows when remembering our story or telling it to other people. It's food for thought in either case.

Monday, March 02, 2009

Stephen Ziliak and Deirdre McCloskey: The Cult of Statistical Significance

Stephen Ziliak and Deirdre McCloskey's The Cult of Statistical Significance is a poorly argued rant about what appears to be an important topic in the pursuit of scientific knowledge. Ziliak and McCloskey argue that many of the statistical sciences have been using the wrong metric to determine whether the results of experiments are interesting and relevant. They report on a few detailed reviews of articles in top journals in economics, psychology, and other fields to show that the problem they describe is real and pervasive. Unfortunately, they are much more interested in casting aspersions on the work and influence of Ronald Fisher and building up his colleague William Gosset, and so they never actually explain how to apply their preferred approach. Amid the rant, they do manage to make the defects of Fisher's approach clear, though it's tedious reading.

The basic story is that Fisher argued that the main point of science is establishing what we know, and to that end, the important result of any scientific experiment is a clear statement of whether the results are statistically significant. According to Fisher, that tells you what confidence you should have that the results would be repeated if you ran the experiment again. Ziliak and McCloskey want you to understand that a result can be statistically significant but practically useless. And there are worse cases, where statistical significance and Fisher's approach lead scientists to hide more relevant results, or worse, to conclude that a proposal was ineffective when the data showed that a large effect might be present but the experiment failed to establish it with confidence. Ziliak and McCloskey want scientists to primarily report the size of the effects they find, along with their confidence in the result. To Ziliak and McCloskey, a large effect discovered in noisy data is far more important than a small effect in very clear data. They point out that with a large enough sample, every effect will be statistically significant. (Though they don't explain this point in any detail, nor give any numbers on what "large enough" means. I have an intuitive feeling for why this might be true, but this was just one of many points that wasn't presented clearly.)
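The arithmetic behind that last point is easy to sketch. For a difference of two means, the test statistic grows with the square root of the sample size, so any fixed nonzero effect, however trivial, eventually clears the 1.96 cutoff for p < .05. The numbers below (an effect of 0.01 standard deviations, and my own choice of sample sizes) are illustrative, not from the book:

```python
import math

def z_stat(effect, sigma, n):
    """Two-sample z statistic for a mean difference of `effect`,
    common standard deviation `sigma`, and n observations per group."""
    return effect / (sigma * math.sqrt(2.0 / n))

# A practically negligible effect: 0.01 standard deviations.
for n in (100, 10_000, 1_000_000):
    z = z_stat(0.01, 1.0, n)
    verdict = "significant" if z > 1.96 else "not significant"
    print(f"n = {n:>9}: z = {z:.2f} ({verdict})")
```

At a million observations per group the z statistic is about 7, comfortably "significant", even though a hundredth of a standard deviation would rarely matter in practice.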

They describe a few stories in detail to show the consequences for public policy. Vioxx was approved, they claim, because the tests of statistical significance allowed the scientists to fudge their results sufficiently to hide the deleterious effects. (It's not clear why this should be blamed on statistical significance rather than on corruption.) They also present a case in which a study of unemployment insurance in Illinois found a large effect ($4.29 in benefit for every dollar spent), but gave the Fisherian conclusion, not just that the result wasn't statistically significant, but that there was no effect. A careful review of the data showed that the program had a statistically significant benefit-cost ratio of $7.07 for white women, but the overall ratio was reported as insignificant because the $4.29 was only significant at the .12 level, while .05 or less is required by Fisher's followers.

Ziliak and McCloskey demonstrate that they're on the right side of the epistemological debate by supporting the use of Bayes' Law in describing scientific results, but beyond one example, they don't explain how a scientific paper should use it in presenting results. Fisher's approach gives a clear guide: describe some hypotheses, perform some tests, then analyze the results to show which relationships are significant. With Bayes, the reasoning, approach, and explanation are more complicated, but Ziliak and McCloskey don't show how to do it. Of the 29 references to Bayesian theory in the index, 24 have descriptions like "Feynman advocates ..." or "Orthodox Fisherians oppose ...". There aren't any examples of how one might write a conclusion to a paper using Bayesian reasoning, even though they repeatedly give examples of the analogous Fisherian reasoning they find unacceptable.
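For what it's worth, the kind of Bayesian summary they seem to want isn't hard to sketch. In a conjugate normal-normal model, a paper could report a posterior distribution over the effect size itself rather than a p-value. Everything here is my own illustration: the prior, the noise level, and the sample size are invented, and only the $4.29 figure echoes the Illinois example.

```python
import math

def posterior_normal(prior_mean, prior_sd, data_mean, data_sd, n):
    """Conjugate normal-normal update with known noise s.d.:
    returns the posterior mean and s.d. of the effect size."""
    prior_prec = 1.0 / prior_sd ** 2   # precision of the prior
    data_prec = n / data_sd ** 2       # precision of the sample mean
    post_var = 1.0 / (prior_prec + data_prec)
    post_mean = post_var * (prior_prec * prior_mean + data_prec * data_mean)
    return post_mean, math.sqrt(post_var)

# Hypothetical noisy study: observed benefit-cost ratio of 4.29.
mean, sd = posterior_normal(prior_mean=1.0, prior_sd=3.0,
                            data_mean=4.29, data_sd=8.0, n=25)
print(f"posterior effect size: {mean:.2f} +/- {1.96 * sd:.2f}")
```

Instead of "not significant at .05, therefore no effect", the reader sees an estimated effect with its remaining uncertainty, which is exactly the distinction Ziliak and McCloskey accuse Fisherian reporting of erasing.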

Another question Ziliak and McCloskey argue is important, but don't explain adequately, is the cost of treatments or alternative policies, which significance testing also hides. Fisher's approach allows authors to publish that some proposal would have a statistically significant effect on a societal problem or the course of a disease without mentioning that the cost is exorbitant and the effect small (though likely). Ziliak and McCloskey argue that journal editors should require authors to publish the magnitude of any effects and a comparison of costs and benefits. According to the reviews they've done and others they cite, it's common in top journals to omit this level of detail and to focus on whether experimental results are significantly different from zero.

Another of the authors' pet peeves is "testing for difference from zero". They claim that it's common for papers to report results as "statistically different from zero" when they're barely so; they use the epithet "sign testing" for this case. The inattention to effect size that significance testing allows means that papers get published showing that some treatments have a positive effect on a problem even when the effect is barely better than a placebo. And there are enough scientists performing enough experiments today that many treatments with no real effect will reach this level of significance purely by chance.
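That last point is just the arithmetic of repeated testing: at a .05 threshold, each experiment on a genuinely null effect has a 5% chance of a false positive, and those chances compound. A quick check (the experiment counts are mine, for illustration):

```python
# Chance that at least one of k independent tests of genuinely
# null effects comes out "statistically significant" at p < .05.
for k in (1, 20, 100):
    p_any = 1.0 - 0.95 ** k
    print(f"{k:>3} experiments: {p_any:.0%} chance of a false positive")
```

Run twenty independent trials of a useless treatment and the odds are nearly two to one that at least one of them "works".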

Overall, the book spends far too much time on personalities and politics. Even when the discussion is substantive, too much effort goes into why the standard approach is mistaken and far too little on how to do science right, or why their preferred approaches would actually lead to better science.

For the layperson trying to follow the progress of science, and occasionally to dip into the literature to make a decision about what treatment to recommend to a family member or what supplements would best enhance longevity or health, the point is that scientific papers have to be read more carefully. Ziliak and McCloskey argue that editors, even of prestigious journals, are using the wrong metrics in choosing what papers to accept, and often pressure authors to present their results in formats that aren't useful for this purpose.

When reading papers, concentrate on the size and the costs of the effects being described. Significance can be relevant, but the fact that a paper appeared in a major publication doesn't mean that the effects being described are important or useful. Don't be surprised if the most-cited papers in some area don't actually present the circumstances in which an intervention would be useful. Don't assume that all "significant" effects are relevant or strong.

Friday, September 12, 2008

Nick Humphrey: Seeing Red

Nick Humphrey's Seeing Red is another attempt to explain consciousness, from a slightly different angle. Humphrey clearly understands what it would mean to produce an explanation, and makes some progress on the task. He starts not with what it means to think about something or to be aware of something, but with the more fundamental fact of perceiving something outside ourselves. His focal example is the sensation of red: something in your environment produces the perception of redness. What just happened to you? What does it mean that it makes you sense the presence of red? Why can you share this experience with others who also perceive the redness, or with people who aren't present but still understand what you mean?

Humphrey first concentrates on the internal details: first you perceive, then you become aware that you are perceiving. You may put words to the sensation or you may not, but Humphrey takes pains to point out that the perceiving and the awareness are two separate facts. If you then talk to someone else about the perception (which you can do because you're aware of it), then of necessity each of you has some kind of "theory of mind": a mental model representing the fact that whatever it means to perceive, you are something that can do it, and other people are capable of the same thing.

Having set these aspects out, Humphrey goes to some trouble to demonstrate that they are separate facets of reality, and that all of them need to be present in an actual explanation. He talks about things like 'blindsight' and optical illusions to convince readers who aren't keeping up that these are distinct phenomena and must remain distinct in any explanation.

In the second half of this small book, Humphrey explains that consciousness arises out of the neurons in the brain, and that their role is to reflect and represent what's really going on in the world. He wants to present an evolutionary explanation of why they arose, but he only really justifies the fact that they are useful. The mechanism and history that allowed a feedback process between sensing and acting to arise and be passed down as a competitive advantage eludes him. And he doesn't have much to say about how the neural substrate might represent facts about reality in such a way that it could actually be useful to an aware, active agent interacting with the world.

My bottom line is that this book lays out the issues fairly clearly, in a way that ought to be interesting and convincing to someone just starting to think about how consciousness might work, but the explanations fall short of answering the deeper questions. On the other hand, Humphrey's stated goal in the book is to show that consciousness matters and that it can be productive to think carefully about it. At that much, he succeeds.

Wednesday, December 05, 2007

Keith Stanovich: The Robot's Rebellion

Keith Stanovich, in his book The Robot's Rebellion, takes the stance that we are vehicles driven by our genes and memes, and tries to give us the tools and a place to stand to figure out what matters to us. (The metaphor is that we are robots driven by these influences, and should want to regain control for ourselves.) Since the only tools we can reason with, and all of our values, are held by and in the control of our genes and memes, this is a daunting task.

Without explicitly recognizing that he's discussing epistemology, Stanovich does a commendable job of summarizing the current research on standard biases in human reasoning. Once you understand the predilections of the tools you rely on, you can try to compensate for them and start to figure out what you want. Stanovich's proposal is fundamentally consonant with Pancritical Rationalism (the source of the name of this blog). The metaphor he uses is that of repairing a ship plank by plank while at sea: regardless of how much or how little confidence you have in the current framework, you have to stand somewhere in order to start examining what's there and replacing the parts you don't trust.

Much of the book repeats stories and results that have been widely reported in such popular books as Stumbling on Happiness, Adapting Minds, The Mating Mind, and The Blank Slate, but this material is easy to skim. Stanovich spends a lot of ink explaining that some of our analysis is done by mechanisms that are built-in and harder to introspect on or to change. This is relevant later when he talks about reconciling different desires.

One example of meta-rationality that Stanovich presents well is the point that introspection on your values may lead you to find apparent conflicts: you enjoy doing something, but wish you enjoyed it less, or you don't enjoy it and you wish you did. He provides a notation for talking about this kind of situation which I found kind of clumsy, but the idea of thinking about such things and having a language for analyzing them is valuable. He explains why you might have these conflicts, and why it is valuable to reason about the conflict from a viewpoint that is meta to both. Once you decide which desire is more important, he also shows that it's possible to use that understanding to bring your values into alignment, even when it's the more basic, inbuilt drive that you want to change. (I blogged last year about goals and meta-goals as ends and means).

Stanovich spends only about 20 pages on identifying and defusing opinions and desires that serve to protect your memes from your introspection, but these sections are his most valuable contribution. The memes that set up a self-reinforcing structure forbidding evaluation of the meme-complexes themselves are the ones that most deserve concentrated attention. I think he explains this point well enough that people in the grip of religious (or other defensive) ideas would be able to see how the prohibition on introspection serves only the meme-cluster, which might help them get over the hurdle, start down a reflective epistemological path, and figure out what their own goals are.

Unfortunately, Stanovich ends the book by trying to show that markets subvert the goal of reconciling our desires and meta-desires. His argument is that markets only pay attention to money, and so the people with the most money get what they want and everyone else gets nothing. What this misses is that of all the actually possible social institutions, markets are unique in not giving a few people complete control of the economy. In a market, some people have more money and therefore get to command more resources, but anyone who has some money can still use it to buy some of what they want. The great failing of socialism is that only the politicians get a voice. But this is a minor failing of the book. On the whole, it's nice to see a book that learns from Evolutionary Psychology, and uses those ideas to help people learn how to think about what they want.