I love metrics. I live for metrics. Every week and every quarter I drown my team in metrics reviews. Why? Because metrics are the instrumentation — the flight panel — of our business. Good metrics provide clear insights. They cut through politics, spin, and haze. They spark amazing debates. They help you understand your business and compare it to others.
I love metrics, but I’ll never be a slave to them. Far too often in business I see people who are metrics slaves. Instead of mastering metrics to optimize the business, the metrics become the master and the manager a slave.
I define metrics slavery as what happens when managers stop thinking and work blindly toward achieving a metric, regardless of whether they believe doing so is best for the business.
One great thing about sports analytics is that despite an amazing slew of metrics, everyone remembers that it’s the team with the most goals that wins, not the one that takes the most shots. In business, we often get that wrong in both subtle and not-so-subtle ways.
Here are metrics mistakes that often lead to metrics slavery.
- Dysfunctional compensation plans, where managers actively and openly work on what they believe are the wrong priorities in response to a compensation plan that drives them to do so. The more coin-operated the type of people in a department, the more carefully you must define incentives. While strategic marketers might challenge a poorly aligned compensation plan, most salespeople will simply behave exactly as dictated by the compensation plan. Be careful what you ask for, because you will often get it.
- Poor metric selection. Marketers who count leads instead of opportunities are counting shots instead of goals. I can’t stand to see tradeshow teams giving away valuable items so they can run the card of every passing attendee. They might feel great about getting 500 leads by the end of the day, but if 200 are people who will never buy, then they are not only useless but actually have negative value because the company’s nurture machine is going to invest fruitless effort in converting them.
- Lack of leading indicators. Most managers are more comfortable with solid lagging indicators than they are with squishier leading indicators. For example, you might argue that leads are a great leading indicator of sales, and you’d be right to the extent that they are good leads. This then requires you to define “good,” which is typically done using some ABC-style scoring system. But because the scoring system is complex, subjective, and requires iteration and regression to define, some managers find the whole thing too squishy and say “let’s just count leads.” That’s the equivalent of counting shots, including shots off-goal that never could have scored. While leading indicators require a great deal of thought to get right, you must include them in your key metrics, lest you create a company of backwards-looking managers.
- Poorly-defined metrics. The plus/minus metric in hockey is one of my favorite sports metrics because it measures teamwork, something I’d argue is pretty hard to measure. [1] However, there is a known problem with the plus/minus rating: it includes time spent on power plays [2] and penalty kills [3]. Among other problems, this unfairly penalizes defenders on the penalty-killing unit, diluting the value of the metric. Yet, as far as I know, no one has fixed this problem. So while it’s tracked, people don’t take it too seriously because of its known limitations. Do you have metrics like this at your company? If so, fix them.
- Self-fulfilling metrics. These are potential leading indicators where management loses sight of the point and accidentally turns their value into a self-fulfilling prophecy. Pipeline coverage (value of oppties in the pipeline / plan) is such a metric. Long ago, it was a good leading indicator of plan attainment, but over the past decade literally every sales organization I know has institutionalized beating salespeople unless they have 3x coverage. What’s happened? Today, everyone has 3x coverage. It just doesn’t mean anything anymore. See this post for a long rant on this topic.
- Ill-defined metrics, which happen a lot in benchmarking, where we try to compare, for example, our churn rate to an industry average. If you are going to make such comparisons, you must begin with clear definitions or else you are simply counting angels on pinheads. See this post, where I show that off the same data I can calculate a renewals rate of 69%, 80%, 100%, 103%, 120%, 208%, or 310%, depending on how I choose to calculate it. If you want to do a meaningful benchmark, you had better be comparing the 80% to the 80%, not the 208%.
- Blind benchmarking. The strategic mistake that managers make in benchmarking is trying to converge blindly to the industry average. This reminds me of the Vonnegut short story “Harrison Bergeron,” where ballerinas have to wear sash-weights and the intelligentsia have music blasted into their ears in order to make everyone equal. Benchmarks should be tools of understanding, not instruments of oppression. In addition, remember that benchmarks definitionally blend industry participants with different strategies. One company may invest heavily in R&D as part of a product-leadership strategy. Another may invest heavily in S&M as part of a market-share-leadership strategy. A third may invest heavily in supply-chain optimization as part of a cost-leadership strategy. Aspiring to the average of these companies is a recipe for failure, not success, as you will end up in a strategic no man’s land. In my opinion, this is the most dangerous form of metrics slavery because it happens at the boardroom level, and often with little debate.
- Conflicting metrics. Let’s take a concrete example. Imagine you are running a SaaS business that’s in a turnaround. This year bookings growth was flat. Next year you want to grow bookings 100%. In addition, you want to converge your P&L over time to an industry average of S&M expenses at 50% of revenues, whereas today you are running at 90%. While that may sound reasonable, it’s actually a mathematical impossibility. Why? Because the company is changing trajectories, and in a SaaS business revenues lag bookings by a year. So next year revenue will be growing slowly [4] and that means you need to grow S&M even more slowly if you want to meet the P&L convergence goal. But if you want to meet the 100% bookings growth goal, even with improving efficiency, you’ll need to increase S&M cost by, say, 70%. It’s impossible. #QED. There will always be a tendency to split the difference in such scenarios, but that is a mistake. The question is: which is the better metric to anchor off? The answer, in a SaaS business, is bookings. Ergo, the correct answer is not to split the difference (which would put the bookings goal at risk) but to recognize that bookings is the better metric and anchor S&M expense to bookings growth. This requires a deep understanding of the metrics you use and the courage to confront two conflicting rules of conventional wisdom in so doing.
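To make the "squishiness" of lead scoring concrete, here is a minimal sketch of an ABC-style scoring function. The weights and thresholds are invented for illustration; in practice they would come from iteration and regression against actual conversion data, which is exactly the work that tempts managers to fall back on raw lead counts.

```python
# Hypothetical ABC-style lead score. Weights and thresholds are invented;
# real ones require iteration and regression against conversion data.
def score_lead(title_fit: int, company_fit: int, engagement: int) -> str:
    """Each input is a 0-10 fit score; returns an A/B/C grade."""
    total = 0.5 * title_fit + 0.3 * company_fit + 0.2 * engagement
    if total >= 7:
        return "A"
    if total >= 4:
        return "B"
    return "C"

print(score_lead(9, 8, 6))  # strong fit across the board -> A
print(score_lead(3, 4, 2))  # badge-scan from the tradeshow booth -> C
```

Counting only A and B leads is closer to counting shots on goal; counting everything is counting every swing of the stick.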
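To illustrate how the same data can yield wildly different "renewal rates," here is a hypothetical renewal cohort (invented figures, not the numbers from the post linked above) scored three common ways:

```python
# Hypothetical cohort of contracts up for renewal this period.
# Each tuple: (prior-period ACV, renewed ACV including upsell). Figures invented.
contracts = [
    (100, 130),   # renewed with upsell
    (50, 50),     # renewed flat
    (50, 0),      # churned
    (25, 25),     # renewed flat
]

available = sum(prior for prior, _ in contracts)                  # ACV up for renewal
renewed_gross = sum(min(prior, new) for prior, new in contracts)  # caps out upsell
renewed_net = sum(new for _, new in contracts)                    # includes upsell

logo_rate = sum(1 for _, new in contracts if new > 0) / len(contracts)
gross_dollar_rate = renewed_gross / available
net_dollar_rate = renewed_net / available

print(f"logo: {logo_rate:.0%}, gross $: {gross_dollar_rate:.0%}, "
      f"net $: {net_dollar_rate:.0%}")
```

Three defensible formulas, three different answers off identical data; a benchmark comparison is meaningless unless both sides use the same one.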
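The arithmetic behind the conflicting-metrics example above can be sketched with made-up figures (a hypothetical $100M-revenue company; every number here is an assumption for illustration):

```python
# Hypothetical turnaround SaaS, illustrating why the two goals conflict.
revenue_now = 100.0     # $100M revenue this year
sm_now = 90.0           # S&M at 90% of revenue
revenue_growth = 0.10   # revenue grows slowly next year (lags flat bookings)
revenue_next = revenue_now * (1 + revenue_growth)

# Goal 1: double bookings with improving efficiency -> S&M up ~70%
sm_for_bookings_goal = sm_now * 1.70
ratio_if_bookings_goal = sm_for_bookings_goal / revenue_next  # ~139% of revenue

# Goal 2: converge S&M toward 50% of revenue -> S&M must *shrink* ~39%
sm_for_pnl_goal = 0.50 * revenue_next

print(f"S&M needed for bookings goal: {sm_for_bookings_goal:.0f} "
      f"({ratio_if_bookings_goal:.0%} of next year's revenue)")
print(f"S&M allowed by P&L convergence goal: {sm_for_pnl_goal:.0f}")
```

One goal demands roughly tripling what the other allows; no amount of splitting the difference satisfies both, which is why you have to pick the better anchor.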
In the end, metrics slavery, while all too common, is more about the people than the metrics. Managers need to be challenged to understand metrics. Managers need to be empowered to define new and better metrics. And managers must be told to use their brains at all times and never do something simply to move a metric.
If you’re always thinking critically, you’ll never be a metrics slave. The day you stop, you’ll become one.
# # #
[1] The way it works is simple: if you’re on the ice when your team scores, you get +1. If you’re on the ice when the opponent scores, you get -1. When you look at someone’s plus/minus rating over time, you can see, for example, which forwards hustle back on defense and which don’t.
[2] When, thanks to an opponent’s penalty, you have more players on the ice than they do.
[3] When, thanks to your team’s penalty, your opponent has more players on the ice than you do.
[4] Because bookings grew slowly this year.
(Cross-posted @ Kellblog)