# Vanity Fair's Anti-Bias Error


Michael Lewis has a long review of a new book by Israeli psychologist Daniel Kahneman in this month's Vanity Fair.

The Nobel Prize-winning Kahneman, along with his fellow psychologist Amos Tversky, spent most of his career studying how people rely on all sorts of irrelevant criteria and often ignore lots of relevant data. For example, people tend to project probabilities that hold for large populations onto small populations, even when the sample size is too small to expect that the small group will be representative of the large group.

Kahneman and Tversky have been hugely influential in economics, culminating in Kahneman's 2002 Nobel Memorial Prize in Economics for his work on Prospect Theory, which describes how people choose between probabilistic alternatives and estimate gains and losses, and, most importantly, the errors people typically make while doing so. Behavioral economics is highly indebted to their work.

But Vanity Fair has included a sidebar to Lewis' story that is provoking a lot of discussion on the web.

Here's the second question from the sidebar.

2. A team of psychologists performed personality tests on 100 professionals, of which 30 were engineers and 70 were lawyers. Brief descriptions were written for each subject. The following is a sample of one of the resulting descriptions: Jack is a 45-year-old man. He is married and has four children. He is generally conservative, careful, and ambitious. He shows no interest in political and social issues and spends most of his free time on his many hobbies, which include home carpentry, sailing, and mathematics.

What is the probability that Jack is one of the 30 engineers?

A. 10–40 percent

B. 40–60 percent

C. 60–80 percent

D. 80–100 percent

The correct answer, according to VF, is A.

Here's the explanation:

If you answered anything but A (the correct response being precisely 30 percent), you have fallen victim to the representativeness heuristic again, despite having just read about it. When Kahneman and Tversky performed this experiment, they found that a large percentage of participants overestimated the likelihood that Jack was an engineer, even though mathematically, there was only a 30-in-100 chance of that being true. This proclivity for attaching ourselves to rich details, especially ones that we believe are typical of a certain kind of person (i.e., all engineers must spend every weekend doing math puzzles), is yet another shortcoming of the hyper-efficient System 1.

This is mind-boggling. Are we really expected to believe that all the additional information we have about Jack does not affect the probability that he is a lawyer at all?

Many people think that Vanity Fair may have just made an error when trying to translate the experiment into a quiz for their website. There's a big, and somewhat nerdy, discussion of it over at Y-Combinator.

You can see how the VF writer, Jamie Lalinde, could fall into error here. What Kahneman calls the "Law of Small Numbers" is an attempt to demonstrate that we cannot accurately project what we know about large populations onto small groups. So here the idea would be that what we know about lawyers and engineers in general cannot be accurately projected onto the small sample of 100 professionals. The group is not large enough for us to expect its members to be representative of engineers and lawyers in general.

But this does not mean that the additional information we have about Jack adds nothing at all to the probabilities. In fact, we know that lawyers tend to be more liberal, interested in political and social issues, and not especially interested in mathematics. I never encountered a single person in law school who was not interested in political and social issues; indeed, most of law school involves discussing them. It would be hard to make it through three years of legal training without such an interest.

And, conversely, engineers are more conservative, often uninterested in political and social issues, and more likely to be into math.

These factors cannot be ignored, even in a small sample size. Believing the odds that Jack is an engineer are greater than "precisely 30 percent" is not "falling victim" to the representativeness heuristic.

Think of it this way. If we were told "Jack graduated from an engineering graduate program," would that affect the odds? According to Lalinde's view, apparently not. After all, some lawyers attended engineering graduate programs, so it is always possible that Jack is still a lawyer. But this only tells us that the chances of Jack being an engineer are less than 100 percent, certainly not that they remain "precisely 30 percent."
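The point can be put in simple Bayesian terms. The sketch below applies Bayes' theorem starting from the quiz's 30 percent base rate; the trait likelihoods are made-up numbers chosen purely for illustration, not figures from Kahneman and Tversky's data.

```python
def posterior_engineer(prior, p_traits_given_eng, p_traits_given_law):
    """P(engineer | traits) via Bayes' theorem.

    prior: base rate of engineers in the group
    p_traits_given_eng / p_traits_given_law: how likely Jack's
    profile is among engineers vs. lawyers (hypothetical values)
    """
    numerator = prior * p_traits_given_eng
    denominator = numerator + (1 - prior) * p_traits_given_law
    return numerator / denominator

# Suppose, hypothetically, Jack's profile is five times as likely
# among engineers (0.50) as among lawyers (0.10):
p = posterior_engineer(0.30, 0.50, 0.10)
print(round(p, 3))  # 0.682
```

Even with a 30 percent base rate, diagnostic information pushes the answer well above "precisely 30 percent." Only if the traits were equally likely in both groups would the posterior stay at the base rate.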

And, in fact, this is not how Kahneman and Tversky used the question about Jack in their work at all. They used it to show that people's estimates of the probability that Jack is an engineer did not change regardless of whether they were told engineers made up 30 percent of the group or 70 percent. In other words, people were ignoring the make-up of the group altogether. That, of course, could only be the right thing to do if it were impossible for someone with the traits Jack has to be a lawyer (which, actually, it may be).
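Their actual finding, base-rate neglect, can be illustrated with the same Bayesian arithmetic: holding any fixed likelihood ratio constant (the 5:1 ratio below is a hypothetical value), the 30 percent and 70 percent base rates should produce quite different answers, yet subjects gave roughly the same one for both.

```python
def posterior_engineer(prior, likelihood_ratio):
    """P(engineer | traits), where likelihood_ratio =
    P(traits | engineer) / P(traits | lawyer)."""
    return prior * likelihood_ratio / (prior * likelihood_ratio + (1 - prior))

# Same hypothetical 5:1 likelihood ratio, two different base rates:
print(round(posterior_engineer(0.30, 5.0), 3))  # 0.682
print(round(posterior_engineer(0.70, 5.0), 3))  # 0.921
```

A rational estimate moves with the base rate; the experimental subjects' estimates did not.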

In other words, Kahneman and Tversky show that people tend to employ a representativeness heuristic when deciding the odds that Jack is an engineer, applying the standard population-wide odds that someone with these traits would be an engineer to the small sample of 100 people. Lalinde has taken the wrong lesson from this, deciding that it is always wrong to allow the perception of stereotypical traits to influence our estimates of people. We could call Lalinde's mistake the Anti-Stereotype Bias.

(Hat tip to Steve Sailer)