COMMON BIASES WE NEED TO UNDERSTAND
Published at: http://www.archania.org
November 9, 2017
1 Selection bias in national history curricula
When we study the past, it is very difficult to be completely objective. Even if we don't fabricate history, we cannot include everything in a history curriculum, so we include some things while excluding others. The problem is that nations tend to favor historical events that make them appear more prestigious, while excluding events that make them appear less prestigious. This can perpetuate conflicts.
[Figure: two mirrored timelines (1300–1475) ranking transgressions from trivial to serious. The curriculum in Montague history classes covers only transgressions against the Montagues committed by the Capulets, while the curriculum in Capulet history classes covers only transgressions against the Capulets committed by the Montagues, perpetuating the conflict.]
Figure 1: An example of how selection bias in history curricula can perpetuate a conflict. Most
countries have some selection bias in their history curricula, and this might be one of the main
reasons why the Israel-Palestine conflict never seems to end. It also seems to have been a
prominent reason for the Cold War, the Yugoslav Wars, and presumably many other conflicts.
You might assume, however, that if we are later in life presented with historical facts that make our countries seem less prestigious, we will develop a more unbiased and objective understanding of our histories. Unfortunately, this is not the case, since we tend to ignore facts that oppose our beliefs while actively seeking more facts that strengthen them.
2 Confirmation bias
When people search for information on the Internet, they are likely to search for information that
confirms their beliefs, rather than for information that might be opposed to their beliefs. Even
when people are confronted with information that contradicts their beliefs, they are likely to
ignore it. This causes different political and religious groups to grow further apart, which in turn creates more conflict in the world.
[Figure: a diagram of my set of beliefs, drawn toward information that confirms my beliefs and away from information that contradicts my beliefs.]
Figure 2: How I am much more interested in finding information that confirms my beliefs
We are in general much better at seeing correlations we are looking for than correlations we aren't looking for. If, for example, we want to figure out whether a symptom is indicative of a disease, we might check whether the infected are likely to have the symptom. From this alone, we might erroneously start to believe that the symptom is indicative of being infected.
Infected
Symptom present 40
Symptom absent 10
Figure 3: The symptom is four times more common among the infected
It is, however, also possible to check whether the people who aren't infected are likely to have the symptom. From this alone, we might erroneously start to believe that the symptom is indicative of not being infected.
Not infected
Symptom present 800
Symptom absent 200
Figure 4: The symptom is four times more common among people that aren’t infected
If we compare all of these numbers, we see that there is actually no correlation between the symptom and the disease. The symptom is simply more prevalent among people in general, both those who are infected and those who aren't. The probability of being infected if you have the symptom is the same as the probability of being infected in general.
\[
P(S) = \frac{|S|}{|A|} = \frac{840}{1050} = 0.8 \qquad
P(I) = \frac{|I|}{|A|} = \frac{50}{1050} \approx 0.0476 \qquad
P(S \cap I) = \frac{|S \cap I|}{|A|} = \frac{40}{1050} \approx 0.0381
\]
\[
P(I \mid S) = \frac{P(S \cap I)}{P(S)} = \frac{0.0381}{0.8} \approx 0.0476 = P(I) = 4.76\%
\]
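The same calculation can be written out in a few lines of Python. This is just a minimal sketch of the arithmetic above, with the counts taken from Figures 3 and 4; the variable names are my own.

    # Counts from Figures 3 and 4 (infected vs. not infected, symptom present vs. absent).
    symptom_infected = 40
    no_symptom_infected = 10
    symptom_healthy = 800
    no_symptom_healthy = 200

    total = symptom_infected + no_symptom_infected + symptom_healthy + no_symptom_healthy  # 1050

    p_symptom = (symptom_infected + symptom_healthy) / total       # P(S) = 840/1050 = 0.8
    p_infected = (symptom_infected + no_symptom_infected) / total  # P(I) = 50/1050 ~ 0.0476
    p_both = symptom_infected / total                              # P(S and I) = 40/1050 ~ 0.0381

    # Conditional probability: P(I | S) = P(S and I) / P(S).
    p_infected_given_symptom = p_both / p_symptom

    print(f"P(I)     = {p_infected:.4f}")                # 0.0476
    print(f"P(I | S) = {p_infected_given_symptom:.4f}")  # 0.0476 -- identical, so no correlation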
This can also be related to political and religious convictions. We might selectively look only at the prevalence of favorable things in our own religions and/or political affiliations, without comparing it to the prevalence of those same favorable things in other religions and/or political affiliations. Similarly, we might look at the absence of adverse things in our own religions and/or political affiliations, without comparing it to their absence in other religions and/or political affiliations.
3 The illusion of homogeneous perfection
People tend to associate perfection with one ethnicity, culture and/or personality type, often their own. This way of thinking fails to recognize the benefits of diversity, as stated in the diversity prediction theorem.
\[
(C - X)^2 = \frac{1}{n}\sum_{i=1}^{n}(x_i - X)^2 - \frac{1}{n}\sum_{i=1}^{n}(x_i - C)^2
\]
The team's square error = the mean square error − the diversity of the team
Figure 5: The diversity prediction theorem, formulated by Scott E. Page at the University of Michigan[2]. A more detailed explanation of the theorem can be found here (PDF, HTML). The theorem has huge implications for how one might choose to put together a team.
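The theorem is easy to verify numerically. Below is a small Python sketch with invented example predictions; the numbers are purely illustrative.

    # Hypothetical individual predictions x_i and true value X (invented for illustration).
    predictions = [12.0, 15.0, 9.0, 18.0]
    truth = 13.0

    n = len(predictions)
    team = sum(predictions) / n  # the team's collective prediction C

    team_sq_error = (team - truth) ** 2
    mean_sq_error = sum((x - truth) ** 2 for x in predictions) / n
    diversity = sum((x - team) ** 2 for x in predictions) / n

    # The identity: the team's square error equals the mean square error minus the diversity.
    print(team_sq_error)              # 0.25
    print(mean_sq_error - diversity)  # 0.25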
4 We only dislike closed-minded people if they adhere to other ideologies
We usually don't mind closed-minded people who adhere to the same ideologies as us, while we usually dislike closed-minded people who adhere to other ideologies[3]. On the other hand, if people adhering to different ideologies are open-minded, we tend to like them much more. So we should probably be a bit less tolerant of closed-minded people adhering to our own ideologies, given how much we dislike closed-minded people adhering to other ideologies.
[Figure: a 2×2 grid of open-minded and closed-minded people, adhering either to my own ideology or to a different ideology.]
Figure 6: How we usually don’t mind closed-minded people adhering to our own ideologies,
but dislike them when they are adhering to other ideologies.
5 The principle of least effort
According to the principle of least effort, we tend to choose the alternative that requires the least effort[1]. It is analogous to the path of least resistance in physics: over time, rivers usually find the path of least resistance. When using search engines, we have a tendency to avoid complicated explanations in favor of more simplistic explanations, even when the complicated explanations are more accurate and/or more trustworthy.
[Figure: searching for explanations branches into simplistic explanations, which require the least effort to understand, and complicated explanations, which require the most effort to understand.]
Figure 7: Since it requires more effort to understand complicated explanations, we have a
tendency to accept simplistic explanations, even if they are less accurate and/or less trustworthy.
This is why students often avoid topics that require a lot of work in favor of topics that require less work. It can further be related to the appeal of populism in politics. Populist politicians propose simplistic solutions to complicated problems, such as the war on drugs, the war on terror, or building a wall to stop immigration. Since such simplistic solutions are easy for people to comprehend, they tend to get widespread support, even if they aren't necessarily the best solutions to these complicated problems.
6 The fallacy of rosy retrospection
We derive more pleasure from thinking about nice things that happened to us in the past, than
from thinking about boring and distasteful things that happened to us in the past. So we have
a tendency to think more about nice things that happened to us in the past, and every time we
remember something, we strengthen the memory. We also modify it a little, to make it appear
even more agreeable, so that we can derive even more pleasure from thinking about it in the
future.
Figure 8: How we are inclined to remember the past as more colorful and beautiful than it
really was, since we tend to focus more upon nice memories than upon boring and distasteful
memories.
Over time this tends to give us an overly positive image of the past. It also tends to make us think that things are getting worse, or that society as a whole is in a state of decline. This way of thinking can also lead civilizations toward stagnation, since there is much more focus upon reestablishing the past than upon incorporating new ideas.
7 The introspection illusion
Sometimes when people are forced to explain their behavior or their choices, they might struggle
a bit with coming up with an explanation. But after a while, most people manage to come up with
some explanation. Research has however shown that these explanations tend to be fabrications,
rather than true reasons[4,5]. We are often not aware of why we are behaving in a certain way,
or why we made a choice, but if we are forced to come up with an explanation, we manage to
fabricate something. We also tend to believe in these fabrications ourselves, even though they
usually aren’t based upon why we really behaved like that, or why we really made that choice.
8 Dunning–Kruger effect
When we are starting to learn about a new topic or a new skill, we might overestimate our
competence, simply because we haven’t learned yet about all the things we don’t know or
haven’t mastered. As we learn more about what we don’t know or haven’t mastered, our
confidence tends to go down. If, however, we continue to learn, our confidence might start to increase again.
[Figure: confidence plotted against experience, from novice to expert. Confidence starts high (overestimation of competence), drops with the realization of how little you know, and then rises again as true competence grows.]
9 The illusion of explanatory depth
1. The illusion of knowing more about things we care about
Research has shown that the more people care about something, the more they tend to think they know about it, regardless of whether this is actually the case[6]. For example, people who are heavily involved in environmental organizations or care a lot about environmentalism might erroneously think they know a lot about the scientific theories related to global warming, even if this isn't necessarily the case.
2. The illusion of knowing more about things we use a lot
People often feel like they know how an item works if they know how to use it[7]. For example, people who drive a lot might think they have a better understanding of how their car works than is actually the case. Similarly, people who use computers and cellphones a lot might think they have a better understanding of how these devices work than is actually the case.
10 Self-serving bias and the fundamental attribution error
We often ascribe our own successes to our superior skills rather than to external circumstances. When it comes to failures, however, we tend to blame them on external circumstances. We are probably better off taking more responsibility for our failures, since this gives us motivation to improve ourselves.
[Figure: "My successes are caused by my superior skills" contrasted with "My failures are caused by external circumstances".]
Figure 9: How we tend to regard our successes as being related to our superior skills, while
regarding our problems as being caused by external circumstances.
We are, however, quick to blame other people for their failures, without taking into consideration that external circumstances might have influenced their failures too. This can lead to hostilities in marriages and work environments.
[Figure: the double standard: "My failures are caused by external circumstances" versus "Other people should blame themselves for their failures".]
Figure 10: How we tend to blame external circumstances for our own problems, while neglecting
that other people might also fail due to external circumstances.
11 Selection bias in gender studies
Some feminists might argue that western democracies with equal rights for men and women still discriminate against women, since there tend to be more men in favorable societal positions, such as manager positions. However, there also tend to be more men in prisons and in other unfavorable societal positions. This might simply be due to a greater variability in IQ among men than among women. It is actually selection bias to focus upon only one side of the spectrum.
[Figure: overlapping IQ distributions for men and women over an IQ axis from 55 to 145, with people in unfavorable societal positions in the left tail and people in favorable societal positions in the right tail.]
Figure 11: Graph showing that we might expect more men in both favorable and unfavorable societal positions if men have a greater variability of IQ than women.
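This tail effect is easy to quantify. The Python sketch below compares the tails of two normal distributions with the same mean but slightly different standard deviations; the parameters are illustrative assumptions on my part, not measured values.

    from math import erf, sqrt

    def tail_above(x, mean, sd):
        """P(X > x) for a normal distribution with the given mean and standard deviation."""
        z = (x - mean) / sd
        return 0.5 * (1.0 - erf(z / sqrt(2.0)))

    # Same mean, different spread (illustrative numbers only).
    for label, sd in [("sd = 15", 15.0), ("sd = 14", 14.0)]:
        p = tail_above(145.0, 100.0, sd)
        print(f"{label}: {p:.3%} above 145 (and, by symmetry, below 55)")

Even this small difference in spread roughly doubles the higher-variability group's share in both tails (about 0.135% versus 0.066%).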
12 Neglect of base rate
Most tests have false positives, since there usually is a bit of luck and/or randomness involved. For extremely rare conditions, these false positives can actually be far more common than the true positives. However, people often tend to neglect the background probabilities of rare conditions[8]. Such a rare condition might, for example, be having an IQ of more than 145.
[Figure: a normal distribution over an IQ axis from 55 to 145, with bands containing roughly 0.1%, 2%, 14%, 34%, 34%, 14%, 2%, and 0.1% of the world population.]
Figure 12: A normal distribution for IQ with associated percentages of the world population.
Let us imagine that someone developed an IQ test which predicts whether a person has an IQ of more than 145 with 99% accuracy. You take the test, and you score positive for an IQ of more than 145. Should you believe that you indeed have an IQ of more than 145? After all, the test is supposed to be 99% accurate. However, since only 0.1% of the world population is supposed to have an IQ of more than 145, you need to take this into consideration and use Bayes' theorem to find the real likelihood that you have such a high IQ.
\[
P(\text{145+ in IQ} \mid \text{positive test}) = \frac{P(\text{positive test} \mid \text{145+ in IQ}) \, P(\text{145+ in IQ})}{P(\text{positive test})} = \frac{1.00 \times 0.001}{(1.00 \times 0.001) + (0.01 \times 0.999)} = \frac{0.001}{0.01099} \approx 0.091 \approx 9\%
\]
Figure 13: The likelihood that you have indeed more than 145 in IQ, when the base rate is taken into consideration. The likelihood is calculated using Bayes' theorem. A more detailed explanation of the theorem can be found here (PDF, HTML).
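The calculation is compact enough to sanity-check in Python. A minimal sketch, using the assumptions of Figure 13: a 0.1% base rate, a 1% false-positive rate, and perfect sensitivity.

    base_rate = 0.001       # P(145+ in IQ): roughly 0.1% of the world population
    sensitivity = 1.00      # P(positive | 145+ in IQ), as assumed in Figure 13
    false_positive = 0.01   # P(positive | not 145+ in IQ)

    # Total probability of a positive test, over both groups.
    p_positive = sensitivity * base_rate + false_positive * (1 - base_rate)

    # Bayes' theorem: P(145+ | positive) = P(positive | 145+) P(145+) / P(positive).
    posterior = sensitivity * base_rate / p_positive

    print(f"P(145+ in IQ | positive test) = {posterior:.4f}")  # ~0.0910, i.e. about 9%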
13 The law of large numbers
If you toss a fair coin, and assign the value 1 to heads and the value 0 to tails, the average value gets closer to the expected value (0.5) with more trials. For coin tosses, the average value doesn't seem to reliably get really close to the expected value before around 100 000 tosses. In medicinal, nutritional and behavioral studies, there is always a bit of randomness for each participant. This can be minimized by using a large number of participants.
[Figure: the running average value of coin tosses plotted against the number of trials on a logarithmic axis from 1 to 100 000, converging toward 0.5.]
Figure 14: How the average value of coin tosses gets closer to the expected value with more trials
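Figure 14 is straightforward to reproduce. A quick Python simulation of the running average; the seed is arbitrary, chosen only to make the run reproducible.

    import random

    random.seed(42)  # arbitrary seed, for a reproducible illustration

    total = 0
    for toss in range(1, 100_001):
        total += random.randint(0, 1)  # heads = 1, tails = 0
        if toss in (1, 10, 100, 1_000, 10_000, 100_000):
            print(f"{toss:>7} tosses: running average = {total / toss:.4f}")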
14 Unreliable generalizations
1. Generalizations based upon a single individual
You might often hear people say something like this: "I know a guy that smoked and lived until he was 100, so smoking cannot possibly be that bad for you". This is a generalization based upon a single individual. As we have seen from the law of large numbers, the average value of coin tosses varies widely until around 1000 trials, and we don't get really good estimates of the expected value before between 10 000 and 100 000 trials. So we need a large number of individuals (preferably around 100 000) to make reliable generalizations.
2. Generalizations based upon our friends
In order to make reliable generalizations, we also need a random selection of people, and your friends are not a random selection of people. You might, for example, work for a construction company, and most of your friends could be colleagues from work. If you generalized based upon your friends, you might erroneously start to believe that people in general know a lot about construction.
15 Bibliography
[1] G. K. Zipf, Human Behavior and the Principle of Least Effort: An Introduction to Human Ecology.
Martino Fine Books, 2012.
[2] S. E. Page, "Where diversity comes from and why it matters?," European Journal of Social Psychology, vol. 44, pp. 267–279, 2014.
[3] C. Wilson, V. Ottati, and E. Price, "Open-minded cognition: The attitude justification effect," The Journal of Positive Psychology, vol. 12, no. 1, pp. 47–58, 2017.
[4] T. D. Wilson, D. S. Dunn, D. Kraft, and D. J. Lisle, "Introspection, attitude change, and attitude-behavior consistency: the disruptive effects of explaining why we feel the way we do," in Advances in Experimental Social Psychology, pp. 287–343, Elsevier, 1989.
[5] T. D. Wilson and J. W. Schooler, "Thinking too much: introspection can reduce the quality of preferences and decisions," Journal of Personality and Social Psychology, vol. 60, pp. 181–192, Feb. 1991.
[6] M. Fisher and F. C. Keil, "The illusion of argument justification," Journal of Experimental Psychology: General, vol. 143, no. 1, pp. 425–433, 2014.
[7] L. Rozenblit and F. Keil, "The misunderstood limits of folk science: an illusion of explanatory depth," Cognitive Science, vol. 26, pp. 521–562, Sep. 2002.
[8] M. Bar-Hillel, "The base-rate fallacy in probability judgments," Acta Psychologica, vol. 44, pp. 211–233, May 1980.