‘Deciding how to decide’ by Hugh Courtney, Dan Lovallo and Carmina Clarke, Harvard Business Review, November 2013
This is the first of two interesting articles on strategic decision making in this edition of Harvard Business Review. The article has strong connections with McKinsey: Courtney worked for the firm before becoming a US business school professor, and Lovallo advises McKinsey while also working as a professor of strategy at Sydney University.
The common underlying theme of both articles is that there are different types of strategic decision, yet managers continue to use the same approaches and techniques without appreciating the fundamental differences. This first article suggests that the key points of difference are whether you understand the ‘causal model’ – in simple terms, the factors that will deliver success – and whether you can predict outcomes with reasonable certainty.
If both these criteria are met, conventional techniques like discounted
cash flow are suitable to evaluate the decision; if not, other approaches are
required.
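To make that conventional option concrete, here is a minimal discounted cash flow sketch in Python; the outlay, inflows and discount rate are all invented for illustration and are not taken from the article.

```python
# A minimal discounted cash flow (DCF) sketch; all figures are invented.
def npv(rate, cash_flows):
    """Net present value, where cash_flows[0] is the initial (year-0)
    outlay and later entries are year-end amounts."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

# Hypothetical project: 1,000 outlay, five years of 300 inflows,
# discounted at a 10% cost of capital.
project = [-1000, 300, 300, 300, 300, 300]
print(f"NPV: {npv(0.10, project):.1f}")  # positive, so the project clears the hurdle
```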
The article describes a number of potential situations based on knowledge of these two variables. Where one or both of these criteria are not met, other tools are needed; for instance, where the causal model is well understood but predictions are difficult – maybe because competitive response is uncertain – complex modelling tools like Monte Carlo and Real Options analysis are required.
These techniques enable the simulation of a number of possible outcomes
and the use of statistical analysis to support the decision.
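For readers unfamiliar with the technique, here is a minimal Monte Carlo sketch in Python of that idea; the normal distribution of inflows and every figure in it are my own assumptions for illustration.

```python
# A minimal Monte Carlo sketch: simulate many possible outcomes for an
# uncertain input (here, the annual cash inflow) and examine the
# distribution of NPVs rather than a single point estimate.
import random

def npv(rate, cash_flows):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

random.seed(1)                         # reproducible run
results = []
for _ in range(10_000):
    # Assume inflows with mean 300 and standard deviation 100, the spread
    # standing in for, say, an uncertain competitive response.
    flows = [-1000] + [random.gauss(300, 100) for _ in range(5)]
    results.append(npv(0.10, flows))

results.sort()
median = results[len(results) // 2]
prob_loss = sum(r < 0 for r in results) / len(results)
print(f"Median NPV: {median:.0f}")
print(f"Probability of a negative NPV: {prob_loss:.0%}")
```

The statistical summary at the end – a median outcome and a probability of loss – is the kind of output the authors suggest should support the decision.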
This led me to my first reservations about the approach, because our team at MTP has seen few day-to-day applications of these tools in our work with some of the world’s top companies. This could be because the high-calibre managers of these companies are all falling into the trap suggested by the article, but I think this unlikely. A more likely explanation is that, apart from one-off major complex projects, they have found these advanced tools to be less practical than conventional evaluation techniques and have instead adapted the latter by applying more flexible, dynamic analysis, in particular the asking of ‘what if’ questions. But despite these reservations I read on with as open a mind as possible.
The first suggestion is the application of Qualitative Scenario Analysis in situations where the model is well understood but there is still a high level of uncertainty. This seems to be no more than looking at a number of likely outcomes and their consequences; hardly a great step forward. The most challenging situation is clearly where the causal model is not understood and where accurate predictions are not possible. An example would be entering a new, developing market – either organically or by acquisition – and fighting against competitors whose actions are impossible to predict. One answer might be to avoid such ‘opportunities’ because the risk is too high, but instead the authors recommend what they describe as an underutilised tool – ‘Case Based Decision Analysis’. This approach involves collecting information about analogous situations from past experience in other sectors and applying these lessons to support the decision.
I would have liked to see more details of this approach as it clearly has merit in principle; however, it assumes that there are analogous situations and that these really can deliver lessons to improve decision making. Senior executives have often been criticised in the past for failing because they were fighting previous battles when situations had changed in subtle ways, and there could be similar dangers with this approach. Are two complex business situations ever really alike? Do lessons from the past always have implications for the future? A number of business school cases that we have covered on our courses would suggest otherwise.
The article moves on to make some interesting points under the heading of ‘Complicating Factors’. One is the perennial problem of bias, which affects any tool that tries to predict the future. The authors suggest that most decision makers are over-confident in their assessment of their own ability to predict, so their decision tree for determining which tool to use will be distorted. Their answer is for the decision making to be open and transparent, with judgments about the level of predictability and the choice of evaluation tools subject to challenge by peers. The authors admit that this may require a culture change in many organisations; how many CEOs would admit to their team that they do not understand the business model and cannot predict the outcome?
The other key point, made at the end, is that few decisions require only one evaluation tool; the best evaluations use a combination. An example would be a discounted cash flow evaluation combined with sensitivity, range and probability analysis, and maybe also a decision tree for multiple options. In our experience this is what most top companies are already doing for complex decisions, and I see dangers in the suggestion that decisions can be pigeon-holed into particular situations, each with its own specific tool. The best decisions are often the result of multiple evaluations and tools which stimulate challenge and debate, leading to informed and balanced judgments.
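As a rough sketch of what that combination looks like in practice, the Python snippet below pairs a base-case DCF with simple ‘what if’ sensitivity ranges across the discount rate and the annual inflow; the project figures are hypothetical.

```python
# A sketch of the combined approach: a base-case DCF plus simple
# 'what if' sensitivity ranges. All project figures are hypothetical.
def npv(rate, cash_flows):
    """NPV where cash_flows[0] is the year-0 outlay."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

base_outlay, years = -1000, 5
rates = (0.08, 0.10, 0.12)            # optimistic / base / pessimistic cost of capital
print(" " * 12 + "".join(f"{r:>8.0%}" for r in rates))
for inflow in (250, 300, 350):        # pessimistic / base / optimistic annual inflow
    flows = [base_outlay] + [inflow] * years
    print(f"{'inflow ' + str(inflow):<12}" +
          "".join(f"{npv(r, flows):>8.0f}" for r in rates))
```

Even this toy version shows the full range of outcomes at a glance and invites exactly the kind of challenge and debate described above, without needing specialised software.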
Read the article in full here: http://hbr.org/2013/11/deciding-how-to-decide/ar/1
See the second HBR article review here: http://alanwarner.blogspot.com/2014/01/what-makes-strategic-decisions-different.html