Few decisions in business or in life are without risk. The outcomes we expect from our decisions depend in some part on a set of assumptions about the world around us. For big decisions with a large impact, we often attempt to gauge the risk involved. This can give us not a single expected outcome, but a range of outcomes with different probabilities.
For example, if we raise prices 5%, we can attempt to measure the risk of decreased demand. In some cases, we can derive a mathematical view of price elasticity, but in many cases this is either impossible or more of an exercise in gaming the numbers than useful statistical analysis. Most pricing decisions are made in the latter circumstances. We still need to make decisions; after all, not making a pricing move carries risk, too. We’d like to have some idea of the risks involved, but we may have very little real data.
There’s a great article in the New York Times Magazine on Risk Management in the financial sector, arguing that overly simplified ways of looking at “Value at Risk” or VAR may have contributed to the current meltdown. In short, back in the early 90s, analysts at JP Morgan developed a methodology for measuring how much risk was involved in certain trades up to a certain expected probability. This VAR metric became the standard for judging risk across Wall Street. The beauty and the curse of VAR was that it wrapped all the risk factors up into a single number. Management could even aggregate VAR from different parts of the company into a global view of value at risk.
For a group of people well versed in the notion that proxies do not directly represent underlying systems, Wall Street bought into VAR with surprising enthusiasm. When everyone was making money, no one seemed to mind that the numbers could be fudged, that there is no way to keep track of all possible risks, and that if you run a system long enough under conditions of 99% certainty, sooner or later you run into the other 1%.
(Perhaps more intriguing than the debate over the merits of VAR is the larger systems issue. Whether or not VAR oversimplifies risks, in a surging economy, firms that take large, risky positions often outperform those that adhere to more prudent strategies. Money, acclaim, and talent flow in that direction. The safety net implied by the rescue of Long Term Capital Management, which destroyed itself in part by relying too heavily on computerized risk models, further encourages risk-taking.)
While this type of risk assessment has its drawbacks, it’s a lot more useful than no risk assessment at all, if used properly as a tool rather than a crutch. And it’s certainly more information than most companies have when making pricing decisions.
In the absence of strong risk models or data to fully support predictive analytics, it can be helpful to develop scenarios run with different assumptions at different levels of probability. For example, the company contemplating the 5% price increase might expect to see accelerated growth, steady growth, flat sales, a small decline in sales, or a large decline in sales depending on the circumstances. We can guesstimate best, likely, and worst-case scenarios. (This often shows that better pricing discipline with low margin customers has a positive impact even in the worst-case scenario, providing organizational support for narrowing price bands.)
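The scenario approach above can be sketched in a few lines of code. Every figure below (base revenue, volume responses, subjective probabilities) is an assumption invented for illustration, not data from any real engagement:

```python
# A minimal sketch of best/likely/worst-case scenario modeling for a
# hypothetical 5% price increase. All numbers are illustrative assumptions.

def revenue_impact(price_change, volume_change, base_revenue=1_000_000):
    """Revenue after a price move, given an assumed volume response."""
    return base_revenue * (1 + price_change) * (1 + volume_change)

price_change = 0.05  # the contemplated 5% increase

# Assumed volume responses and rough subjective probabilities per scenario.
scenarios = {
    "best":   {"volume_change":  0.02, "probability": 0.2},
    "likely": {"volume_change": -0.03, "probability": 0.6},
    "worst":  {"volume_change": -0.12, "probability": 0.2},
}

for name, s in scenarios.items():
    outcome = revenue_impact(price_change, s["volume_change"])
    print(f"{name:>6}: revenue {outcome:,.0f} (p={s['probability']})")

# Probability-weighted expected revenue across the three scenarios.
expected = sum(
    s["probability"] * revenue_impact(price_change, s["volume_change"])
    for s in scenarios.values()
)
print(f"expected: {expected:,.0f}")
```

Even a toy model like this forces the question "what volume response do we actually believe, and how sure are we?"; which is often more valuable than the resulting number itself.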
Naturally, this type of modeling is best used as a tool, not a crutch. In one meeting with a manufacturer early in 2008, a senior executive stated that they expected strong commodity prices to continue for several years. They wanted to move quickly to capture that opportunity and not waste time building flexibility into tools and processes to adjust for declining commodity prices. Whether or not subordinates felt the same way, they did not express disagreement. We raised the issue, but this was perceived as a way for us to charge more without providing any more value, and the idea was rejected. Although we tried to do the right thing, we failed to convince the customer, and now they are suffering under declining commodity prices.
Another important point to keep in mind: if you are trying to determine risk for a pricing move, it’s critical to have low-level data. For example, if the average price yield is 80% of list price and you want to move the yield to 82%, you will run into all kinds of trouble if you treat customers as a single entity, simply move the average price up, and then assign a single defection likelihood. You have to look at each customer individually, potentially move some customers more than others, and examine each customer’s defection chances separately. It may not be possible to do this at the level of detail we would like in order to be completely certain, but using attributes of each customer we can get a much closer approximation of likely behavior.
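As a sketch of what customer-level modeling might look like, the toy example below gives each hypothetical customer its own yield, volume, and assumed defection probability, then pushes the low-yield customers hardest. Every customer record and probability here is invented for illustration:

```python
# A minimal sketch of customer-level yield modeling. The blended baseline
# yield works out to 80% of list; the moves target roughly 82%. All records
# and defection probabilities are illustrative assumptions.

customers = [
    {"name": "A", "yield": 0.76, "list_volume": 500_000, "defect_if_moved": 0.02},
    {"name": "B", "yield": 0.80, "list_volume": 300_000, "defect_if_moved": 0.01},
    {"name": "C", "yield": 0.90, "list_volume": 200_000, "defect_if_moved": 0.20},
]

def expected_revenue(custs, moves):
    """Expected revenue given per-customer yield moves (in yield points).

    A moved customer defects with their assumed probability, contributing
    nothing; otherwise they pay at the new yield.
    """
    total = 0.0
    for c in custs:
        move = moves.get(c["name"], 0.0)
        p_stay = 1.0 - (c["defect_if_moved"] if move > 0 else 0.0)
        total += p_stay * (c["yield"] + move) * c["list_volume"]
    return total

# Push the lowest-yield customers hardest; leave the high-yield one alone.
moves = {"A": 0.03, "B": 0.02, "C": 0.0}

print(f"baseline: {expected_revenue(customers, {}):,.0f}")
print(f"with moves: {expected_revenue(customers, moves):,.0f}")
```

Note how defection risk pulls the expected yield below the nominal 82% target; that gap between nominal and expected is exactly what aggregate-level analysis hides.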
We worked with one company that was making adjustments to its product portfolio that involved, among other things, introducing new products. While the goal of the new product was to encourage people to trade up from a less expensive offering, there was also the threat of people trading down from higher-priced offerings. At first the “value at risk” seemed enormous: too great to justify the modest expansion in the market from the new offering. However, deeper analysis revealed that while the theoretical maximum number of potential downgraders was huge, practically speaking only a small fraction would even really consider it. Running through various scenarios showed that through careful pricing of the new product, we could almost guarantee a positive overall impact. (Different prices optimized the best, likely, and worst-case scenarios.) This was more than enough information for the CEO to greenlight the new product, which went on to have a positive bottom-line impact. Part of the reason this exercise was successful was that the executive team was not simply looking at a number for comfort, but actively debated the numbers and the methodology behind them until everyone had a good “feel” for what the numbers meant and some of the risks involved.
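A toy version of that upgrade/downgrade trade-off might look like the following. The prices, segment sizes, and propensity assumptions are all hypothetical, and the linear upgrade-propensity curve is a deliberate simplification:

```python
# A minimal sketch of pricing a new mid-tier product between an entry and a
# premium offering. All figures and response curves are invented assumptions.

ENTRY_PRICE = 40.0
PREMIUM_PRICE = 100.0

ENTRY_CUSTOMERS = 10_000   # candidates to trade up to the new product
PREMIUM_CUSTOMERS = 5_000  # theoretical maximum pool of downgraders

def net_impact(new_price, downgrade_rate=0.03, base_upgrade_rate=0.25):
    """Net revenue change from introducing the new product at new_price.

    Upgrade propensity is assumed to fall linearly as the new price rises
    toward the premium price; only a small assumed fraction of premium
    customers would realistically downgrade.
    """
    upgrade_rate = base_upgrade_rate * (
        1 - (new_price - ENTRY_PRICE) / (PREMIUM_PRICE - ENTRY_PRICE)
    )
    upgrades = ENTRY_CUSTOMERS * upgrade_rate
    downgrades = PREMIUM_CUSTOMERS * downgrade_rate

    gain = upgrades * (new_price - ENTRY_PRICE)
    loss = downgrades * (PREMIUM_PRICE - new_price)
    return gain - loss

# Compare candidate price points for the new product.
for price in (55.0, 65.0, 75.0):
    print(f"price {price:.0f}: net impact {net_impact(price):+,.0f}")
```

The point of a sketch like this is the downgrade_rate assumption: plugging in the theoretical maximum versus a realistic fraction is what separates "enormous value at risk" from a near-guaranteed positive outcome.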
If you are contemplating price changes in 2009, whether in list prices, contracts, or discounts, doing a risk assessment can help you find the “sweet spots” and avoid negative outcomes.