Scientists talk about “correlation vs. causation.” Correlation means they see two trends move together but can’t prove that one caused the other; the classic example is that regions with more storks also report more human births, yet storks don’t deliver babies. Causation is a proven cause-and-effect connection between two events.
When we track metrics in business, I notice that people often hesitate to use a metric if they can’t prove causation. Please know that your metrics exist to tell you whether you’re moving in the right direction; they are not a science experiment. (Unless you are actually running a science experiment, but that’s outside the scope of this blog post.)
Metrics tell you where to look deeper. They are indicators, not answers. When I see teams trying to create an exhaustive dashboard of metrics, I see exhausted executives reading them. You should need no more than 5 metrics. When one of those metrics raises a flag, you go dig in and look at the 5 metrics behind it. Staring at 50 different metrics at once has the opposite effect: the reader misses the key indicators.
No indexes. And if by 5 metrics you think I mean, “Take your 50 and roll them into 5 indexes,” the answer is no. Indexes water down the data, and it’s really hard to tell when one is raising a red flag.
Metrics prove or disprove a hypothesis. Teams often use metrics to test a hypothesis: “We believe this product launch will increase market share.” When teams track market share, they worry that other factors moved the number and it wasn’t really a result of their work. Here’s where you need to do a reasonable job of showing correlation without getting too hung up on causation. If you’ve scoped your market well, can compare your gains against competitors’, and have reasonable evidence that no other big initiative was affecting this metric, go ahead and call it a win. Of course, the opposite is also true.
How have you kept your metrics from getting too scientific?