As discussed in my last post, motivated reasoning is a pattern of thinking used in decision making (and elsewhere, I suppose). On its face it appears to be evidence-based reasoning, but it departs from evidence-based reasoning at important junctures to produce the conclusions we preferred from the outset. Those departures are possible, for example, when we weigh the strength of arguments or evidence, when we set priorities for investigations and evidence gathering, when we interpret the evidence we find, when we allocate meeting time for discussion of particular points, or when we choose less capable or more capable individuals to present material relevant to pending decisions.
Motivated reasoning is thus a source of bias in workplace decision making. But in combination with certain specific cognitive biases, its effects can be profound. In this post I explore the synergistic effects of motivated reasoning in combination with a cognitive bias known as the Pseudocertainty Effect. [Tversky 1981]
The Pseudocertainty Effect is a cognitive bias that affects our ability to make good decisions in situations that involve projecting outcomes from a sequence of decisions under uncertainty. Because of the Pseudocertainty Effect, we humans have a tendency to focus only on the final decision in the sequence, ignoring earlier-stage uncertainties.
Explaining the Pseudocertainty Effect is a little easier if I begin with the Certainty Effect. [Kahneman 1979.1] The Certainty Effect is a cognitive bias that causes us to value too highly the utility of those outcomes that are certain, as compared to the utility of outcomes that are merely probable. The use of the word "highly" is a bit tricky here, because the Certainty Effect applies for both welcome and unwelcome outcomes — for both positive and negative utility. That is, the Certainty Effect also causes us to overestimate the "damage" of an unwelcome outcome when that outcome is certain, as compared to the damages of unwelcome outcomes that are merely probable.
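To see the Certainty Effect in expected-value terms, here's a minimal Python sketch. The payoffs and probabilities are illustrative, patterned after the kinds of gambles Kahneman and Tversky used in their experiments; they aren't quoted from the paper.

```python
# A minimal expected-value comparison illustrating the Certainty Effect.
# Payoffs and probabilities are illustrative, not quoted from the paper.

def expected_value(probability: float, payoff: float) -> float:
    """Expected monetary value of a simple one-shot gamble."""
    return probability * payoff

sure_thing = expected_value(1.00, 3000)  # a certain $3000
gamble = expected_value(0.80, 4000)      # an 80% chance of $4000

# The gamble has the higher expected value ($3200 vs. $3000), yet most
# experimental subjects prefer the certain $3000. That preference for
# certainty, beyond what the arithmetic justifies, is the Certainty Effect.
print(f"Sure thing: ${sure_thing:.0f}  Gamble: ${gamble:.0f}")
```

The bias shows up as the gap between what the expected values recommend and what people actually choose.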
Let's return now to the Pseudocertainty Effect. Consider a two-stage decision process. In the first stage we must choose between two options, Opt11 and Opt12, that produce different outcomes, O11 and O12, with probabilities P11 and P12 respectively. Similarly, the second stage also has two options, Opt21 and Opt22 that produce different outcomes, O21 and O22, with probabilities P21 and P22 respectively. However, in this scenario, we reach this second stage only if we choose Option Opt11, and we succeed in achieving Outcome O11. Since the probability of Outcome O11 is only P11, reaching the second stage isn't a sure thing.
Now it gets a little tricky.
In the second stage, the respective probabilities are P21 and P22. If P21 is 100%, that is, if the probability of Option Opt21 producing Outcome O21 is 100%, then the Certainty Effect would tend to cause us to overvalue O21.
But now, because we're in a two-stage decision process, the actual probability of Outcome O21 isn't 100%. It's only P11*P21, assuming that the option outcome distributions are probabilistically independent of each other. But Kahneman and Tversky demonstrated experimentally that people tend to overvalue Outcome O21 in a manner analogous to how they would have treated it if it actually were certain, that is, if P11*P21 were 100%. Hence the term Pseudocertainty Effect. That is, people tend to disregard the fact that the first stage of this two-stage scenario imposes a probability distribution that affects the final outcome. Instead, people focus only on the final stage of the two-stage scenario.
The experimental results suggest that people tend to "assume away" the uncertainties of the first stage of a two-stage decision string, and choose options only on the basis of the final stage. Or, at least, they give too much weight to the uncertainties of the final stage. This is the essence of the Pseudocertainty Effect.
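In code, the two-stage structure looks like this. The probabilities and payoffs here are hypothetical, chosen only to make the arithmetic concrete; the notation follows the text (P11, P21, and so on).

```python
# Two-stage decision from the text. We reach Stage 2 only with
# probability P11, so an option that is "certain" within Stage 2 is
# not certain overall. All numbers are hypothetical.

p11 = 0.25   # probability of achieving Outcome O11 and reaching Stage 2

p21 = 1.00   # within Stage 2, Opt21 yields O21 with certainty
p22 = 0.80   # within Stage 2, Opt22 yields O22 only 80% of the time

o21_payoff = 3000
o22_payoff = 4000

# Compound probabilities, assuming the stages are independent:
actual_p21 = p11 * p21   # 0.25 -- not the 100% our intuition uses
actual_p22 = p11 * p22   # 0.20

ev_opt21 = actual_p21 * o21_payoff   # 0.25 * 3000 = 750
ev_opt22 = actual_p22 * o22_payoff   # 0.20 * 4000 = 800

# The Pseudocertainty Effect: people evaluate Opt21 as if its
# probability were p21 (100%) rather than p11 * p21 (25%), and so tend
# to prefer it even though Opt22 has the higher expected value here.
```

The point of the sketch is the gap between p21 and actual_p21: treating the first as the second is exactly the error Kahneman and Tversky observed.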
Presumably, this phenomenon also applies to multi-stage scenarios, and to scenarios in which the probability distributions of the various stages aren't entirely independent.
Synergistic effects of Pseudocertainty Effect and Motivated Reasoning
The Pseudocertainty Effect has implications for workplace decision making in the context of motivated reasoning. Either phenomenon, acting alone, can be costly. But when both are acting they display a synergy that can be especially pernicious. For example, consider risk management.
In a typical risk management problem, we identify five attributes of a risk: the risk event, its probability, its impact, a response if it materializes, and a mitigation strategy. What makes risk analysis so interesting is that some risks cannot materialize unless other risks materialize first, forming a "risk string." For example, some neighborhoods in Houston, Texas, can flood only if (a) a hurricane passes over the area and (b) a dam fails as a result of rainfall so extreme that the dam cannot withstand the pressure of the accumulated water. These two events combine to form an example of a risk string of length 2. Longer strings are clearly possible.
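When the stages are independent, a risk string's compound probability is just the product of its stages' probabilities. A sketch of the Houston example, with invented probabilities:

```python
# A length-2 risk string: the flood can materialize only if the
# hurricane materializes first. Both probabilities are invented for
# illustration; they are not estimates for any real dam or storm.

p_hurricane = 0.05   # P(hurricane passes over the area this year)
p_dam_fails = 0.10   # P(dam fails, given hurricane-scale rainfall)

# Probability that the downstream risk -- the flood -- materializes:
p_flood = p_hurricane * p_dam_fails   # 0.05 * 0.10 = 0.005
```

The compound probability is far smaller than either factor alone, which is precisely the information the Pseudocertainty Effect tempts us to discard.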
Because analyzing risk strings inherently produces staged decision strings, risk strings provide a setting in which the Pseudocertainty Effect can take hold. But the effect will be even more significant if we can identify a preferred outcome. That is, if we're also at risk of engaging in motivated reasoning, then the Pseudocertainty Effect can cause some real trouble.
For example, consider a risk A' that can materialize only if risk A materializes and is successfully addressed by OptA, one of the options defined for risk A. The decisions regarding how much to invest in mitigating either A or A' (or both) satisfy the structure Kahneman and Tversky studied in their research into the Pseudocertainty Effect. Moreover, in risk analysis, decision makers have a clear preference for investing a minimum amount in risk mitigation. They are thus at risk of engaging in motivated reasoning to justify low-cost options for risk management.
Probably the most dangerous case occurs when OptA', one of the options for dealing with risk A', is very low cost but nearly certain to work. Risk managers will be tempted to implement that option, despite the fact that its probability of succeeding also depends on the success of Option OptA. Because of the Pseudocertainty Effect, risk managers will tend to ignore the probabilities associated with Risk A. That will lead them to over-invest in preparing for OptA', which they regard as leading to a favored outcome because it is low cost.
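A sketch of how the pseudocertain view inflates the apparent value of OptA'. The probabilities, impact, and cost are hypothetical, chosen only to show the size of the distortion:

```python
# Valuing OptA' with and without the Pseudocertainty Effect.
# All figures are hypothetical.

p_a = 0.10         # P(risk A materializes and OptA succeeds)
p_a_prime = 0.95   # P(OptA' succeeds, given we reach the A' stage)
impact = 500_000   # loss if A' materializes unmitigated

# Pseudocertain view: treat the A stage as already resolved.
perceived_benefit = p_a_prime * impact        # 0.95 * 500,000 = 475,000

# Compound view: OptA' matters only if the A stage actually occurs.
actual_benefit = p_a * p_a_prime * impact     # 0.10 * 0.95 * 500,000 = 47,500

# The pseudocertain view overstates the value of OptA' tenfold --
# enough to distort how much preparation OptA' deserves relative to
# mitigating risk A itself.
```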
An example from history

The battleship USS Arizona, burning during the Japanese attack on the U.S. naval base at Pearl Harbor, Hawaii, December 7, 1941. Image source: National Archives and Records Administration, courtesy Wikipedia.
At Pearl Harbor, the strategic decision to defend against sabotage rather than air attack can be regarded as the first stage of a two-stage decision of the kind Kahneman and Tversky studied. A second stage was the deployment of personnel around the island. Personnel who would have been needed for air defense on the day of the attack were either off duty or standing guard at facilities at some remove from the airfields. These decision strings would have produced a successful defense against sabotage, but, as we now know, they produced an unsuccessful defense against air attack. The outcome suggests that the Pseudocertainty Effect might have played a role.
Last words
In situations that involve risk strings and motivated reasoning, trusting to intuition is likely to run afoul of the Pseudocertainty Effect. Careful mathematical analysis of all options under consideration offers a path with a minimum of exposure to the Pseudocertainty Effect.
Footnotes
[Tversky 1981] Amos Tversky and Daniel Kahneman. "The Framing of Decisions and the Psychology of Choice," Science 211:4481 (1981), 453-458.
[Kahneman 1979.1] Daniel Kahneman and Amos Tversky. "Prospect Theory: An Analysis of Decision under Risk," Econometrica 47:2 (1979), 263-291.
This article in its entirety was written by a human being. No machine intelligence was involved in any way.
Point Lookout is a free weekly email newsletter.
Related articles
More articles on Cognitive Biases at Work:
Workplace Politics and Social Exclusion: II
- In workplace politics, social exclusion can be based on the professional role of the target, the organizational
role of the target, or personal attributes of the target. Each kind has its own effects. Each requires
specific responses.
The Planning Fallacy and Self-Interest
- A well-known cognitive bias, the planning fallacy, accounts for many unrealistic estimates of project
cost and schedule. Overruns are common. But another cognitive bias, and organizational politics, combine
with the planning fallacy to make a bad situation even worse.
Motivated Reasoning
- When we prefer a certain outcome of a decision process, we risk falling into a pattern of motivated
reasoning. That can cause us to gather data and construct arguments that erroneously lead to the
outcome we prefer, often outside our awareness. And it can happen even when the outcome we prefer is
known to threaten our safety and security.
Remote Hires: Inquiry
- When knowledge workers join organizations as remote hires, they must learn what's expected of them and
how it fits with what everyone else is doing. This can be difficult when everyone is remote. A systematic
knowledge-based inquiry procedure can help.
Additive bias…or Not: II
- Additive bias is a cognitive bias that many believe contributes to bloat of commercial products. When
we change products to make them more capable, additive bias might not play a role, because economic
considerations sometimes favor additive approaches.
See also Cognitive Biases at Work for more related articles.