I get this question a lot.  Sometimes from people who want to think it’s some kind of magic that doesn’t require rigorous analytical thinking, but more often from people who want proof that it’s a science.  Often their desire for proof is more than philosophical.  If we are suggesting major price moves that will have real consequences for the business, they want as much comfort as they can get in their decisions.

We want to think about pricing as scientifically as possible.  This makes executives more comfortable (and us, for that matter).  It also leads to better decisions.  Most of the price moves we recommend fall into the “low hanging fruit” category.  They are not rocket science.  This means things like “stop selling deals at negative contribution margins.”  Or “stop offering free express shipping for goods that weigh several hundred pounds.”  Even with seemingly incontrovertible suggestions like these, people want data.  Like, “how do we know that we are actually losing money on these deals?  Could we just have funny accounting?”  Or, “if we take away these deals, and we lose the customers, what happens to our overall margins?”  (Reasonable question, but typically these situations are not deliberate “loss leader” tactics; they’re just a matter of things sliding out of bounds.)  In these cases, you don’t need a lot of art or science, just good analytics.  (Yes, that’s what we sell.)
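To make the “good analytics” point concrete, here is a minimal sketch of the kind of check involved.  The deal records and cost fields are entirely made up for illustration; real data would come from the order and cost systems, and the accounting questions above are exactly about whether these fields are trustworthy.

```python
# Flag deals sold at negative contribution margin.
# Deal data and cost fields are hypothetical illustrations.
deals = [
    {"id": "A-101", "revenue": 1000.0, "variable_cost": 850.0, "shipping": 60.0},
    {"id": "A-102", "revenue": 500.0,  "variable_cost": 420.0, "shipping": 120.0},
    {"id": "A-103", "revenue": 750.0,  "variable_cost": 800.0, "shipping": 0.0},
]

def contribution_margin(deal):
    """Revenue minus all variable costs, including shipping we absorb."""
    return deal["revenue"] - deal["variable_cost"] - deal["shipping"]

# Every deal in this list loses money on each unit sold.
losers = [d["id"] for d in deals if contribution_margin(d) < 0]
print(losers)
```

The analysis is trivial; the hard part in practice is agreeing on which costs count as variable, which is where the “funny accounting” objection comes from.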

More complex scenarios involve assumptions about what might happen given a certain price move, perhaps in conjunction with competitive or market changes.  Here, the ability to look at what happens on a fine-grained basis is extremely powerful.  Rather than dealing with averages in a spreadsheet, you can apply the model at the level of individual accounts and transactions and roll up the results.  This often gives very different, and much more reliable, forecasts than working with the averages.  Still, there is a certain amount of art involved in deciding the parameters of the model, since the selection and exclusion of data points has a big impact on the results.
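A small sketch of why the rollup and the averages disagree.  The accounts, prices, and walk-away thresholds below are invented for illustration; the point is only that account-level responses to a price move don’t average out.

```python
# Sketch: transaction-level rollup vs. spreadsheet averages.
# Accounts and willingness-to-pay figures are made up for illustration.
accounts = [
    {"units": 100, "price": 8.0,  "walk_away_price": 9.0},
    {"units": 100, "price": 10.0, "walk_away_price": 15.0},
    {"units": 100, "price": 12.0, "walk_away_price": 20.0},
]

price_bump = 2.0  # proposed across-the-board increase

# Average-based forecast: bump the average price, assume volume holds.
avg_price = sum(a["price"] for a in accounts) / len(accounts)
total_units = sum(a["units"] for a in accounts)
naive_revenue = (avg_price + price_bump) * total_units

# Account-level forecast: an account keeps buying only while the new
# price stays at or below its walk-away threshold.
rolled_up = sum(
    a["units"] * (a["price"] + price_bump)
    for a in accounts
    if a["price"] + price_bump <= a["walk_away_price"]
)

print(naive_revenue, rolled_up)
```

In this toy case the spreadsheet average predicts more revenue than the rollup, because the average hides the one account the price move pushes over its threshold.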

Here, the political setting has as much importance as the mathematics.  If people perceive that a pricing adjustment is about assigning blame for past behavior, their natural response will be to divert or diffuse blame, often by attempting to explain why certain pricing actions were good, rather than asking whether they were good.  On the other hand, if the environment rewards people and teams working together to find opportunities going forward, without assigning blame for what has already happened, people are more willing to look at whether better actions would lead to better results.  The math, economics, and analytics can be identical in the two situations, but an opportunity-focused organization will get much better results than a blame-focused organization.

Even assuming that you are in an opportunity-focused organization where everyone is trying to be objective, you can still run into issues of selection bias.  Ironically, some of this bias is enabled by the capabilities of the very software that’s supposed to provide fact-based analysis.  Powerful, flexible software that lets you easily exclude certain data points, run scenarios, and flag deals far more effectively than Excel is handy, but you can always find another set of scenarios to run or another way to slice the data.  We’ve found some organizations get so excited about having better visibility into pricing performance that they get sucked into analysis paralysis, at least temporarily.

In these situations, there’s a lot of pressure up and down the chain of command to come up with solid, statistically valid decisions.  Which makes perfect sense.  But you can now crunch numbers in so many ways that you can, if you want, create almost any scenario.

Indeed, the same situation has happened in medical trials.  Computing power lets companies essentially run lots of experiments in parallel, and cherry-pick the results they want to see.  (Check out this great article on Ars Technica, We’re so good at medical studies that most of them are wrong.)  I often tell people “we’re running a business, not an FDA study.  We need to make a decision by Friday, so we have to go with the best information we have.”  Then, if we’re doing consulting work, we usually have to run the numbers one more time, with another set of assumptions, before the executive in charge makes the decision to go forward.  Turns out, even the FDA studies have similar problems.
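The cherry-picking mechanism is easy to demonstrate with pure noise.  In this sketch every “pricing experiment” has a true effect of exactly zero, yet reporting only the best of many runs makes it look like something worked.  All names and numbers here are invented for illustration.

```python
# Sketch: cherry-picking among many scenarios manufactures "results".
# Every experiment below has a true effect of zero -- it's all noise.
import random

random.seed(7)

def fake_experiment(n=50):
    """Average margin change across n deals where the true effect is zero."""
    return sum(random.gauss(0, 1) for _ in range(n)) / n

# Run 40 scenario variations, then report only the most flattering one.
results = [fake_experiment() for _ in range(40)]
best = max(results)
honest = sum(results) / len(results)

print(f"best of 40 null experiments: {best:.3f}")   # tends to look like a real lift
print(f"honest average of all 40:    {honest:.3f}") # much closer to the truth: zero
```

This is the same multiple-comparisons problem the Ars Technica piece describes: the more slices you try, the more certain you are to find one that flatters your hypothesis by chance.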

What does this mean?  That we should abandon hope of having a solid mathematical foundation for pricing decisions?  Certainly not.  Just that we can’t ever get to certainty.  But with some decent analytics we can do a lot better.  And that’s all we need.  We don’t need perfection, or even “optimization.”  We just need 1% better.

So is pricing an art?  A science?

Yes.

One Comment

  1. Rags Srinivasan

    Practicing it as science allows us to find fault with the last best thing we had and improve on it. How can you improve on art – you can create new artifacts but are they better than the previous ones? I think a vast majority of pricing is science – deciding whether to end with 99 or 69 is art.

    Regarding making decisions, I cannot help but quote Michael Lewis’s column on relying on numbers to make decisions:
    “The numbers either refute my thinking or support my thinking and when there’s any question, I trust the numbers. The numbers don’t lie. It’s a subtle difference, but it has big implications. If you have an intuition of something but no hard evidence to back it up, you might kind of sort of go about putting that intuition into practice, because there’s still some uncertainty if it’s right or wrong.”

    -rags
    twitter: @pricingright
