Goodhart's Law isn't a law in the legal or scientific sense. Charles Goodhart put it this way in a 1975 paper: "Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes." [Goodhart 1975] More plainly, when we express an organizational goal in terms of a metric, the metric loses its value as a measure of anything. Although Goodhart's interest was managing national economies, his observation turns out to apply also to managing organizations.
Goodhart's insight has only grown in importance since 1975, because metrics have become so central to the practice of managing organizations. Indeed, a Google search for "managing by metrics" (without the quotes) returned 1.1 billion hits. By comparison, a search for "finding love" returned only about four times as many hits.
As an example of the implications of Goodhart's Law, suppose we measure the performance of our project management teams by tracking their budget overruns and schedule overruns. And suppose we use this data when we make decisions about promotions, salaries, and bonuses. Goodhart's Law implies that the value of this data as a measure of performance will gradually decline over time, and ultimately collapse. In this example, we would find that project planners would become so accurate in their projections of cost and schedule as to cast doubt on the measurements, because only the clairvoyant could possibly be so accurate.
Those who use metrics-based approaches to managing their organizations would do well to consider the degree of risk that Goodhart's Law might apply. To assess that risk, examine the factors that could make the metrics data misleading. One such factor is what psychologists call the reification error.
The reification error
To reify, in its psychological sense, is to regard — and treat — an abstract entity as if it were a physical, concrete entity. [Levy 1997] [Brenner 2011] For example, after releasing a bowling ball, with the ball still rolling down the lane, a bowler might reach out a hand as if to push the ball toward the "sweet spot," hoping to knock down more pins. The bowler knows that releasing the ball marks the end of the bowler's influence, and it can be satisfying to pretend otherwise. But for many bowlers, this behavior is no pretense at all.
In the case of metrics, we engage in reification when we assume that we can "measure" something that has no physical manifestation. For example, it's impossible to measure the value of a software engineer's daily output. We can measure the time the engineer spent on a task, because we can measure time. We can measure the number of lines of code produced, because we can count. But the value of the engineer's output isn't measurable until we've subjected it to quality tests, the most basic of which is, "Does it work?" Even after testing, we might not know how adaptable the code is, how maintainable it is, or how difficult it will be for successive engineers to understand how it works. These factors, some of which are known as "ilities" or non-functional requirements (NFRs), aren't directly measurable.
NFRs can't be measured by examining the engineer's output, and some NFRs can't actually be measured at all. To presume that we can measure something as immeasurable as engineering output quality, reducing it to a number or to a set of numbers, is to commit the reification error. What we can measure are proxies for some aspects of some NFRs. But then the reification error leads us to believe that the proxy is the principal — that the measurement is equivalent to the attribute it supposedly stands for. In many situations the two are not equivalent.
How reification can undermine metrics
By itself, reification compromises the utility of a metric far less than it does when the people involved in the processes being measured know the goal value of the metric. In that case, awareness of the goal and the reification error conspire. Here's one illustration of this dynamic.
Consider a metric that purports to represent an organizational attribute that's subject to the reification error. Like engineering productivity, such attributes are necessarily abstractions. If the metric goal is widely known, the temptation to adjust the measurement protocol so as to produce favorable results can be extreme. And since the attribute has no physical manifestation, the variety of possible protocol adjustments is limited only by inventiveness. For example, someone might point out that because the existing protocol "overlooks important phenomena," we must adjust it to gain a more reliable prediction of future performance. Over time, following a stream of adjustments like that, the measurement protocol no longer produces data representative of anything.
Numeric data carries with it the air of objectivity. Because we are accustomed to respecting the reality of numeric measurements of physical entities, we sometimes overlook the flexibility of the relationship between numeric data and the abstractions that data supposedly represents. It is that flexibility that can create a risk of disaster when we manage an organization according to numeric data whose goal values are widely known.
That flexibility has many sources. Next time we'll explore how people engage in "gaming" the metrics — another mechanism that can undermine metrics when the metric goal values are widely known.
Are your projects always (or almost always) late and over budget? Are your project teams plagued by turnover, burnout, and high defect rates? Turn your culture around. Read 52 Tips for Leaders of Project-Oriented Organizations, filled with tips and techniques for organizational leaders. Order Now!
Your comments are welcome. Would you like to see your comments posted here? Send me your comments by email (rbrenZLkFdSHmlHvCaSsuner@ChacbnsTPttsdDaRAswloCanyon.com), or by Web form.
About Point Lookout
Thank you for reading this article. I hope you enjoyed it and found it useful, and that you'll consider recommending it to a friend.
Point Lookout is a free weekly email newsletter. Browse the archive of past issues. Subscribe for free.
Support Point Lookout by joining the Friends of Point Lookout, as an individual or as an organization.
Do you face a complex interpersonal situation? Send it in, anonymously if you like, and I'll give you my two cents.
More articles on Personal, Team, and Organizational Effectiveness:
- Changing the Subject: II
- Sometimes, in conversation, we must change the subject, but we also do it to dominate, manipulate, or
assert power. Subject changing — and controlling its use — can be important political skills.
- Guidelines for Sharing "Resources"
- Often, team members belong to several different teams. The leaders of teams whose members have divided
responsibilities must sometimes contend with each other for the efforts and energies of the people they
share. Here are some suggestions for sharing people effectively.
- The Deck Chairs of the Titanic: Obvious Waste
- Among the most futile and irrelevant actions ever taken in crisis is rearranging the deck chairs of
the Titanic, which, of course, never actually happened. But in the workplace, we engage in
activities just as futile and irrelevant, often outside our awareness. Recognition is the first step.
- Performance Issues for Nonsupervisors
- If, in part of your job, you're a nonsupervisory leader, such as a team lead or a project manager, you
face special challenges when dealing with performance issues. Here are some guidelines for nonsupervisors.
- Paradoxical Policies: I
- Although most organizational policies are constructive, many are outdated or nonsensical, and some are
actually counterproductive. Here's a collection of policies that would be funny if they weren't real.
See also Personal, Team, and Organizational Effectiveness and Problem Solving and Creativity for more related articles.
Forthcoming issues of Point Lookout
- Coming March 29: Time Slot Recycling: The Risks
- When we can't begin a meeting because some people haven't arrived, we sometimes cancel the meeting and hold a different one, with the people who are in attendance. It might seem like a good way to avoid wasting time, but there are risks. Available here and by RSS on March 29.
- And on April 5: The Fallacy of Division
- Errors of reasoning are pervasive in everyday thought in most organizations. One of the more common errors is called the Fallacy of Division, in which we assume that attributes of a class apply to all members of that class. It leads to ridiculous results. Available here and by RSS on April 5.
I offer email and telephone coaching at both corporate and individual rates. Contact Rick for details at rbrenZLkFdSHmlHvCaSsuner@ChacbnsTPttsdDaRAswloCanyon.com or (650) 787-6475, or toll-free in the continental US at (866) 378-5470.
Get the ebook!
Past issues of Point Lookout are available in six ebooks:
- Get 2001-2 in Geese Don't Land on Twigs (PDF)
- Get 2003-4 in Why Dogs Wag (PDF)
- Get 2005-6 in Loopy Things We Do (PDF)
- Get 2007-8 in Things We Believe That Maybe Aren't So True (PDF)
- Get 2009-10 in The Questions Not Asked (PDF)
- Get all of the first twelve years (2001-2012) in The Collected Issues of Point Lookout (PDF)
Are you a writer, editor or publisher on deadline? Are you looking for an article that will get people talking and get compliments flying your way? You can have 500-1000 words in your inbox in one hour. License any article from this Web site. More info