Experimental Economics: Finding Millions of Dollars in the Haystack

Wired recently ran a story on Experimental Economists, who model complex scenarios and attempt to optimize outcomes for large companies and government agencies. My reaction to the story was "wow, that's what Mimiran does all the time; we just didn't have such a cool name for it."

Now let’s walk through how many companies make pricing decisions and how enhanced modeling can help drive better results and provide metrics to gauge pricing success.

Making Decisions without Data
A lot of companies still make pricing decisions with their proverbial gut rather than with data-driven approaches. This may work well for comedian Stephen Colbert, but it can be costly in pricing. People make pricing decisions without data because:

  • They lack adequate data.
  • They lack the means of handling data effectively and turning it into useful information.
  • Despite the financial ramifications of pricing decisions, they are rushing to decide and do not have the weeks or even months a fully considered decision would require.

Limitations of Traditional Pricing Decision Models
Rather than going strictly with the gut, most companies attempt to predict the outcome of pricing moves using simple models, often run in spreadsheets. This approach has the benefit of applying some data to a problem and usually results in better decisions than the gut alone. However, spreadsheet modeling suffers from a serious limitation: spreadsheet models run on averages, often imported from financial reporting systems, and the "average" response to a pricing move is different from the response of the "average customer".

For example, a financial services company wanted to make some pricing changes to bring them closer to competitors' price points and price positioning. Spreadsheet simulations indicated that the move was feasible and desirable: the average customer would not experience too great a change. Unfortunately, this model had no way of showing, or even knowing, that the "average" customer came from aggregating a wide spectrum of individuals. The average of all of these customers seemed like it would react reasonably well to the proposed pricing change. Looking at the customers as individuals, and then aggregating their responses, gave a very different view: high-end customers would capture a great deal of consumer surplus, while low-end customers would face too large a price increase.
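The contrast above can be sketched numerically. In this hypothetical example (all figures invented for illustration), each customer tolerates a price increase up to a personal threshold, then churns; the thresholds average around 10%, so a spreadsheet model built on the "average customer" says an 8% increase is safe, while evaluating customers individually reveals substantial churn:

```python
# Hypothetical illustration: the response of the "average customer" vs.
# the average of individual customer responses to the same price increase.
import random

random.seed(42)

PRICE_INCREASE = 0.08  # proposed 8% price increase

# Each customer tolerates increases up to a personal threshold, then churns.
# Tolerances are spread widely (2% to 18%) even though they average ~10%.
tolerances = [random.uniform(0.02, 0.18) for _ in range(10_000)]

# Spreadsheet-style view: one "average customer" with the mean tolerance.
avg_tolerance = sum(tolerances) / len(tolerances)
average_customer_churns = PRICE_INCREASE > avg_tolerance  # looks safe

# Individual-level view: evaluate each customer, then aggregate.
churn_rate = sum(t < PRICE_INCREASE for t in tolerances) / len(tolerances)
# True probability here is (0.08 - 0.02) / 0.16 = 37.5% of customers leaving.

print("Average customer churns?", average_customer_churns)
print(f"Actual churn rate: {churn_rate:.0%}")
```

The average-customer model and the individual-level model use exactly the same data; only the point at which the averaging happens differs.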

Experimental Economics
Experimental economics allows companies to model the possible outcomes of a pricing decision at the level of an individual customer, then aggregate those results to produce an expected outcome.

Nuanced data models allow prediction of customer behavior based on customer benefit. In the financial services example, we were able to show that certain pricing discounts had a far lesser impact on customer behavior than first predicted. We were also able to show that certain pricing incentives and bundles encouraged customers to consume services in a way that was unprofitable for the firm. By rearranging some of these bundles, we helped the customer achieve positive return on investment in the project before we had even generated final recommendations.

Another example involves a high tech manufacturer looking to take advantage of list pricing opportunities and to tighten discounting variance in the field. Preliminary analysis showed an opportunity of over $20M, which justified bringing in external expertise. More detailed modeling, however, revealed several obstacles to capturing the $20M. First, list price opportunities were impossible to capture reliably because of the maze of contracts and negotiated discounts. The first look at the data suggested that list price changes correlated strongly with actual prices; when we looked deeper, we found an almost random scatter of relationships between list price changes and final price outcomes. The average outcome may have borne some resemblance to the change in inputs, but it was as likely caused by general market forces and inflation as by pricing moves, despite the huge effort put into the annual price review process.

Using this information, we could focus the pricing team more productively on managing the final price. Many underperforming customers targeted for margin improvement were under contract and would not see price changes for some time. Others bought primarily low-margin products; for these customers the problem was not the pricing but the product mix. We were also able to find customers getting discounts far beyond expected ranges for premium products. Focusing on these customers yielded an immediate, actionable $3M opportunity.

How does turning a $20M opportunity into a $3M opportunity constitute success? First, the larger opportunity was not actionable: without more granular detail, there was no way to capture it. Second, it was not actually there. There's a pricing joke: "What is the difference between getting a promotion for capturing a $3M pricing opportunity and getting fired for capturing a $3M opportunity? Promising $2M versus promising $20M."

Merging behavioral data and pricing data can provide even more insight. For example, looking at pricing and usage data for a software company helped us design a pricing model for a new edition. We were able to assess, at the level of the individual customer, who might downgrade from a more expensive edition and whose usage patterns would justify higher price points than the company had originally intuited. This led to appropriate fencing between editions and pricing that addressed "casual" and "hardcore" users separately. The company could provide great value at a good price to the hardcore users and good value at a great price to the casual users. Aggregate information would have led to a single price point, which would have left money on the table for the hardcore users while still not providing compelling value for the casual users.
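The single-price-versus-two-editions tradeoff above can be made concrete with a toy revenue comparison. The segment sizes and willingness-to-pay figures here are invented assumptions, not the software company's actual numbers:

```python
# Hypothetical sketch: revenue from one price point vs. two fenced editions.
# Segment sizes and willingness-to-pay (WTP) values are invented.
casual = [12.0] * 80      # 80 casual users, each valuing the product at $12
hardcore = [40.0] * 20    # 20 hardcore users, each valuing it at $40

def revenue(price, wtps):
    # A customer buys only if the price is at or below their WTP.
    return price * sum(w >= price for w in wtps)

# Best single price: serve everyone at the casual WTP, or only
# hardcore users at the high price.
best_single = max(revenue(p, casual + hardcore) for p in (12.0, 40.0))

# Two fenced editions, each priced near its segment's WTP.
two_editions = revenue(12.0, casual) + revenue(40.0, hardcore)

print(best_single, two_editions)  # two editions capture more revenue
```

The fencing only works if the editions are differentiated enough that hardcore users do not simply buy the cheap one, which is exactly what the individual-level usage data made it possible to design.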

Many of these models depend on inputs whose values are not exactly knowable. However, detailed models help you infer ranges for inputs, such as the percentage of customers who leave over a price increase versus the percentage who opt for a cheaper, substitutable product. In addition, the models can run with a range of inputs, based on customer feedback, surveys, market data, and expert opinion. Any one of these sources could be wrong, but by combining multiple sources, we create a distribution of inputs and a distribution of outputs, which leads to worst-, best-, and likely-case scenarios.
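A minimal Monte Carlo sketch of that idea: draw each uncertain input (churn, substitution) from a range suggested by surveys and expert opinion, simulate many times, and read worst-, likely- and best-case outcomes off the resulting distribution. All the numbers and the downgrade-revenue assumption are illustrative:

```python
# Hypothetical Monte Carlo sketch: distributions of inputs in,
# distribution of outcomes out. All figures are invented assumptions.
import random

random.seed(1)

BASE_REVENUE = 1_000_000.0
PRICE_INCREASE = 0.05

def one_run():
    # Churn and substitution rates are uncertain; sample each from a range.
    churn = random.uniform(0.02, 0.10)      # customers who leave entirely
    downgrade = random.uniform(0.05, 0.15)  # customers moving to a cheaper product
    retained = 1 - churn - downgrade
    # Assumption: downgraders keep paying ~60% of their prior revenue.
    return BASE_REVENUE * (retained * (1 + PRICE_INCREASE) + downgrade * 0.6)

outcomes = sorted(one_run() for _ in range(10_000))
worst = outcomes[int(0.05 * len(outcomes))]   # 5th percentile
likely = outcomes[len(outcomes) // 2]         # median
best = outcomes[int(0.95 * len(outcomes))]    # 95th percentile
```

Rather than a single point estimate, the decision-maker sees a range, and can ask whether even the worst case is acceptable before committing to the price change.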

Putting Theory into Practice
While the benefits of detailed experimental economics are compelling, few companies undertake such activities. There are several reasons for this, foremost among them a lack of awareness that such capabilities exist (hopefully writing this piece will help). Many companies also lack the time, data, expertise, and money. As the Wired article notes, developing these capabilities can require 6- and 7-figure annual investments. However, we have been able to deliver these projects, in many cases, for under $100K. In addition, our software not only provides the models, it also tracks performance against the model's predictions, enabling you to tweak pricing if needed and providing built-in proof of value. Plus, creating these models is a lot of fun and is certainly an eye-opening experience, as our customers get a much deeper view into the performance of their businesses.
