“The king is dead. Long live the king.” was coined at the death of Charles VI of France. He started his reign with the nickname *The Beloved* and ended it with the moniker *The Mad* due to his frequent bouts of insanity. When Charles VI’s coffin descended into the vault of Saint-Denis Basilica, the now-famous cry above was uttered to usher in the next king, the son of Charles VI, Charles VII. Charles Jr doesn’t sound very regal, does it?

The saying has an elegance to it. It basically states: “Chill folks! Yes, things have changed but you can count on the same things working as they always have.”

What if the same needs to be true of Innovation Accounting? Yes, Innovation Accounting itself is relatively new; the term was coined by Eric Ries in The Lean Startup. So, when it comes to managing innovation, didn’t we just declare: “Traditional Accounting is dead. Long live Innovation Accounting.”?

To answer that question, let’s first examine what Innovation Accounting is. I like the definition in The Corporate Startup. It states that Innovation Accounting is about managing:

- Investment decisions at different points in a product’s innovation journey, while ensuring appropriate amounts are invested
- The success of innovation projects, so that decisions can be made to invest in one product over another
- The impact that innovation is having on the business as a whole

Dan Toma and Tendayi Viki wrote an excellent post about it here.

# The Way Things Currently Work

The way that most large organizations make these decisions is via an exhaustive business case. The typical business case attempts to provide justification for investment in a product or project by providing evidence for a return on that investment. This evidence is traditionally forecast up to 5 to 10 years in the future using various methods intended to de-risk the decision including Net Present Value, Sensitivity Analysis, Payback Period, Contingency Planning, and Best/Worst Case Analysis (see here for my in-depth look at the pitfalls of best vs worst case analysis).
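As a point of reference, the workhorse behind most of these forecasts is a discounted cash-flow calculation. Here is a textbook Net Present Value sketch with illustrative numbers of my own (nothing from an actual business case):

```python
def npv(rate, cashflows):
    """Net Present Value: discount each year's cash flow back to today.
    cashflows[0] is the upfront (year-0) flow, usually negative."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

# Invest 100k today hoping for 30k per year over 5 years, at a 10% discount rate.
# A positive NPV is the traditional green light for the investment.
npv(0.10, [-100_000, 30_000, 30_000, 30_000, 30_000, 30_000])
```

The trouble is that every one of those cash-flow numbers is a point estimate of an uncertain future.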

The issue with a business case is that it is being asked to predict the future as a single point estimate. Absent a crystal ball, that asks the impossible.

I certainly don’t intend to re-argue that the business case is grossly outdated as an investment decision-making tool for innovative ideas; many greater minds than mine have already built that case. What I do wish to explore is the application of Monte Carlo simulation to the problem of predicting ROI. It’s the question raised by the three elements of Innovation Accounting above: How do we make decisions to invest in an unpredictable, innovative idea? When should we stop investing in a failing idea? When should we double down on a successful one? Long live Innovation Accounting, indeed.

I was recently asked to review a business case that used some of the above traditional de-risking methods. The spreadsheet that backed the business case forecast 10 years into the future and contained 23 tabs, most of which looked like this.

When I tried to zoom out so that it fit on one screen, the above is what appeared. That’s **572 columns of data**. WTF!!?? Why would anyone spend that amount of time creating something that is almost certain to be exactly wrong and that doesn’t even serve the purpose it was created for: to clarify an investment decision?

# Things Have Changed

Thankfully, there is a better way.

According to Mark Jeffrey of the Kellogg School of Management: “Poor estimates and risk events are most often the drivers of surprise poor ROI”. Humans are terrible at estimation. So, no matter how good the model, estimated garbage in will yield garbage out. Good news! Humans can be calibrated to improve their estimation by practising anti-anchoring and other human psychology hacks intended to remove bias from the estimation process. I’ve calibrated dozens of people to vastly improve their estimation skills.

What about the risk events mentioned by Mark Jeffrey? This is where Monte Carlo comes in. It is a statistical analysis technique that uses brute-force random sampling to calculate the most likely ranges of outcomes for ROI, profit, revenue, or anything else that has a formula with uncertain variables in it.
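As a rough illustration (a toy model of my own, not taken from any real business case), a Monte Carlo run over a simple profit formula looks like this in Python:

```python
import random

def simulate_profit(n_trials=100_000, seed=42):
    """Monte Carlo over a toy profit formula: profit = units * (price - unit_cost).
    Each trial draws one plausible value per uncertain variable and evaluates
    the formula; the spread of the outcomes is the range we care about.

    The uniform ranges below are placeholders; in practice each variable
    would come from a calibrated 90% estimation interval."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n_trials):
        units = rng.uniform(1_000, 10_000)   # uncertain sales volume
        price = rng.uniform(40, 60)          # uncertain selling price
        unit_cost = rng.uniform(25, 45)      # uncertain cost per unit
        profits.append(units * (price - unit_cost))
    return sorted(profits)

profits = simulate_profit()
# The middle 90% of simulated outcomes:
low, high = profits[int(0.05 * len(profits))], profits[int(0.95 * len(profits))]
```

No single trial is a prediction; it is the distribution of all trials together that tells you what is likely.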

Combining these two techniques (i.e. calibration with Monte Carlo) and applying them to business models is what I call The Unison Method. In my business modelling experience, Monte Carlo is not good enough on its own to remove all of the error created by estimation biases. Humans are so bad at estimating that calibration is a necessity.

Let’s get back to the monster business case spreadsheet above. What did I do differently? I applied The Unison Method. Here’s how it worked:

- Extract the product’s profit formula (yes, it was in that monster spreadsheet somewhere :))
- Calibrate the estimator
- Have the estimator estimate each uncertain variable in the profit formula at a particular time in the future, say 5 years
- Run Monte Carlo on the profit formula

*Sidebar*: This doesn’t work quite as you might think. The estimator is calibrated using a method that improves their ability to estimate a range that has a 90% likelihood of containing the true value. Note that this is very hard for most of us to do, as we’ve been rewarded from early childhood for finding the exact right answer. Hence the need for calibration. I provide more explanation on how this works below.
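The steps above can be sketched end to end. This is my reading of the method with hypothetical numbers; one common convention from calibration practice (not spelled out in this article) treats a calibrated 90% interval as the 5th to 95th percentile span of a normal distribution, so its standard deviation is the interval width divided by 3.29:

```python
import random
from statistics import NormalDist

def normal_from_90ci(lower, upper):
    """90% of a normal distribution lies within 1.645 standard deviations
    of the mean, so a 90% interval of width w implies sd = w / (2 * 1.645)."""
    return NormalDist((lower + upper) / 2, (upper - lower) / (2 * 1.645))

def run_profit_simulation(n=50_000, seed=7):
    """Steps 3 and 4: turn each calibrated interval into a distribution,
    then Monte Carlo the profit formula profit = units * (price - unit_cost)."""
    rng = random.Random(seed)
    # Hypothetical calibrated 90% intervals for the year-5 variables.
    units = normal_from_90ci(1_000, 10_000)
    price = normal_from_90ci(40, 60)
    unit_cost = normal_from_90ci(25, 45)
    profits = []
    for _ in range(n):
        u = units.inv_cdf(rng.random())      # one plausible sales volume
        p = price.inv_cdf(rng.random())      # one plausible price
        c = unit_cost.inv_cdf(rng.random())  # one plausible unit cost
        profits.append(u * (p - c))
    profits.sort()
    # Report the middle 90% of simulated profit outcomes.
    return profits[int(0.05 * n)], profits[int(0.95 * n)]
```

The variable names and interval values are mine; the point is only that a handful of calibrated ranges plus a formula is enough input for the whole simulation.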

Those 4 steps took about 8 hours total. I was too scared to ask how many months the above spreadsheet took to develop :). Here’s what the result from The Unison Method for Year 5 Revenue looked like:

As you can see from this visual, you now have the data to make predictions as ranges with an assigned likelihood. For example, there is an approximately 90% chance (I’m just eyeballing the data) that Revenue in year 5 will be between $0 and $411,625. How else can the future be articulated if not as a range of possible outcomes with an assigned probability? That was asked rhetorically :).
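Reading a probability like that off the simulation output is just counting: what fraction of the trials landed inside the range? A minimal helper (the sample numbers are made up, not the article’s actual data):

```python
def probability_in_range(samples, lo, hi):
    """Estimate P(lo <= X <= hi) as the fraction of Monte Carlo
    samples that fall inside [lo, hi]."""
    return sum(lo <= s <= hi for s in samples) / len(samples)

# Toy revenue samples; a real run would use thousands of trials.
revenue_samples = [0, 50_000, 120_000, 380_000, 411_625, 500_000]
probability_in_range(revenue_samples, 0, 411_625)  # 5 of 6 samples, ~0.83
```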

# How To Make Humans Not Suck At Estimating

I want to bring your attention back to the 90% estimation interval for all of the uncertain variables in the profit formula. I’ll run through an example on myself to demonstrate how it works:

Question: How tall is the Eiffel Tower in metres? Please provide the answer as a range that has a 90% chance of containing the true value.

Answer: OK … I have no idea how tall the Eiffel Tower is, especially in metres. Let’s see what I do know about it. I know that it feels like if I laid it down on its side that it wouldn’t be a kilometre long, so it must be less than 1,000 metres high. Staying with the upper bound, it feels like it’s much less than a kilometre high. Could it be half a kilometre high? If I need to be 95% certain that the true value is less than the upper bound, I’d say 700 metres is my upper bound. As for the lower bound, it feels like it should be taller than 200 metres as I can visualize what 100 metres looks like from the 100 metre sprint at the Olympics. To make the true value 95% likely to be higher than the lower bound, I feel like I need to make the lower bound 200 metres. So, my 90% estimation range is 200 to 700 metres.

Notice that I didn’t arrive at a range by making an exact guess and then adding 45% to it to get an upper bound and subtracting 45% to get a lower bound. I treated each bound as a separate estimate. This is one of the anti-anchoring techniques delivered by calibration. By the way, the Eiffel Tower is 324 metres high. Phew, I’m in range :). You’ll have to trust me that I didn’t google it before I demonstrated on myself.

# A New Application For Lean Startup Experimentation

Let’s say that you were estimating this range for your profit formula and you thought it was too wide. Therein lies the hidden beauty of the 90% estimation interval. If you look at the result of The Unison Method and you don’t like what you see, you can go back to your formula and look for 90% estimation interval ranges that are too wide for your liking. These are prime candidates to collect more data about. *The wider the range, the less you know about that variable and the more risky it is*. Your 90% estimation intervals are another tool you have to identify your riskiest assumptions. Sound familiar? It should. This is simply another way to apply Lean Startup experimentation principles to your business model.
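One way to operationalize “the wider the range, the riskier the variable” is to rank variables by the relative width of their 90% intervals. A sketch with hypothetical intervals (normalizing by the midpoint is my own choice, so that variables on different scales compare fairly):

```python
def riskiest_variables(intervals):
    """Rank uncertain variables by the relative width of their 90% interval:
    wider relative range = less known = better experiment candidate.
    `intervals` maps variable name -> (lower, upper)."""
    def relative_width(bounds):
        lo, hi = bounds
        mid = (lo + hi) / 2
        return (hi - lo) / abs(mid) if mid else float("inf")
    return sorted(intervals, key=lambda name: relative_width(intervals[name]),
                  reverse=True)

# Hypothetical intervals for a profit formula's variables.
ranking = riskiest_variables({
    "units_sold": (1_000, 10_000),   # wide: we know little, test this first
    "price": (40, 60),               # narrow: well understood
    "unit_cost": (25, 45),
})
# ranking[0] == "units_sold"
```

The top of that ranking is where a Lean Startup experiment buys you the most information per dollar.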

How often are you asked to predict 5 or 10 years into the future in a business case? What are your frustrations, if any, with the process? No matter if you agree or disagree with the above, I’d love to hear from you.

Happy forecasting!