Working In Uncertainty
Speech at BSI on 13th and 20th February 2004
by Matthew Leitch, 23 February 2004.
On 13 and 20 February 2004, the British Standards Institution held an event to explore demand for new standards in the area of risk management. The event was repeated because of huge interest. I was lucky enough to be one of the speakers and had the opportunity to argue for future standards that would support technical developments in the area rather than holding them back.
The following is not an exact transcript of the speech I gave on either occasion, but an amalgam that is as close as I can get to the intention of both speeches. This is the speech I was trying to give.
‘Good morning ladies and gentlemen. My name is Matthew Leitch and I'm interested in internal controls and risk management. My topic is “Problem areas for current risk management standards.” Actually I have in mind all official documents that want to tell us how to do risk management – statutes, regulations, official “guidance” documents, and standards.
Exciting future developments
‘I'm here because I think risk management has an exciting future. An exciting future of technical developments. These developments will make risk management more popular, give it more impact, and ultimately make it more valuable. We will be heroes – even more than now!
Areas of technical development still to come include these:
Speed: Techniques will emerge that make risk management quicker, easier, and more natural – but just as effective or more so.
Embedding: We'll work out exactly what “embedding” really means!
Personal skills: We've spent a lot of time looking at risk management as a corporate process, but what about the personal skills involved? We've hardly begun to understand and promote the skills and attitudes needed by ordinary managers to manage risk and uncertainty.
Lessons from science: There are many valuable lessons still to be integrated from science.
Ergonomics has much to teach us, particularly about how to present risk information.
Psychology can help us understand subjective ratings of risk, how people work in groups, and, in particular, help us understand uncertainty suppression – why people suppress their uncertainties and what to do about it.
Cognitive science can teach us a lot about how people really think through large, complex problems. The idea of problem solving as a search through a large problem space is particularly useful.
Mathematics is always difficult to learn from because it's hard to understand, but I hope we'll see a greater understanding of probability concepts, and perhaps also Bayesian ideas.
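The Bayesian idea just mentioned can be shown in a tiny sketch. Everything below is illustrative – the event, the probabilities, and the function are invented for this example:

```python
# Illustrative Bayesian update (all numbers invented for this sketch).
# Prior belief that a project will overrun, revised after a warning sign.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(event | evidence) using Bayes' theorem."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

prior = 0.20                   # initial P(overrun)
p_warning_if_overrun = 0.70    # warning signs usually precede overruns
p_warning_if_ok = 0.10         # but are rare otherwise

posterior = bayes_update(prior, p_warning_if_overrun, p_warning_if_ok)
print(round(posterior, 3))     # 0.636
```

The point is not the arithmetic but the discipline: an explicit prior, explicit evidence, and a revised belief, rather than a rating pulled from the air.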
At the end of this speech is a list of references to papers exploring some of these developments and illustrating the kind of progress I have in mind.
Objectives and risks for a standard
‘Let's try to eliminate faulty practices and encourage experimentation and technical improvements.
That's a pretty generic objective for a new standard, but I want to draw your attention to the word “experimentation.” If people do not feel safe to try new things risk management will not develop as quickly as it could.
The risks we face in trying to draft a new standard include:
Not eliminating faulty practices.
Reinforcing faulty practices.
Not encouraging experimentation.
Blocking technical improvements.
Not encouraging technical improvements.
‘Here are some areas of risk management where we need to be especially careful in any new standard. That's because, though important, they are not well understood even by experts; they remain controversial and experts disagree.
Existing standards tend to handle these badly, usually by being inappropriately prescriptive. They demand one approach and exclude others – sometimes giving advice that is quirky and unlikely to stand the test of time.
Upside risks: An upside risk is something that might happen that's better than some benchmark level. The benchmark is something we choose, but typically it is our planned or expected outcome, or the outcome we think “ought” to happen. In some areas of risk management the upside is more important than in others. In safety, for example, the natural benchmark is “total safety”. (It would be distasteful to talk about how many people you “expected” or “planned” to kill or injure.) Consequently there is no upside to speak of. By contrast, in financial risk management it is natural to talk about expected returns and there's nearly always an important upside to consider.
Surveys show that most people interested in risk management would like to manage upside and downside risks in the same process, but in practice it is rarely done at the moment.
Existing standards usually include upside risks within their definition of risk, but then make no further mention of the upside! In one recent standard an approach to managing upside risks is given but it is strangely asymmetrical and I don't think it will stand the test of time.
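The benchmark idea above can be made concrete with a small sketch (all figures invented): the possible outcomes of a venture are split into upside and downside relative to a chosen benchmark.

```python
# Possible outcomes of a venture (probability, outcome) – invented figures.
scenarios = [(0.3, 50), (0.4, 100), (0.2, 150), (0.1, 300)]
benchmark = 100  # e.g. the planned or expected outcome

expected = sum(p * x for p, x in scenarios)
upside = sum(p * (x - benchmark) for p, x in scenarios if x > benchmark)
downside = sum(p * (benchmark - x) for p, x in scenarios if x < benchmark)

print(round(expected))  # 115
print(round(upside))    # 30  (0.2*50 + 0.1*200)
print(round(downside))  # 15  (0.3*50)
```

In a safety setting the benchmark sits at “total safety”, so the upside term vanishes; in a financial setting both terms usually matter, and a symmetric treatment like this handles them in one process.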
Probability × Impact ratings: A different kind of problem is created by the common practice of probability and impact rating. It makes perfect sense, for an individual risk, to rate its probability of occurrence and then its impact if it should occur. However, the vast majority of items on risk registers are not individual risks. They are sets of risks. To rate these we need some form of probability distribution of impact, or an approximation of it. P × I ratings are not an approximation. They are the wrong type of object, mathematically. It's like using apples when you should be using pears.
I raised this in a forum last year and the audience divided into two camps. Some understood the point perfectly – in fact one person sent me a great case study showing how to do it correctly and just as simply. Others were outraged that a practice they had been using for decades was even being criticised.
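A small sketch of the point (all figures invented): a register item that actually covers a set of risks is summarised far better by even a rough distribution of impact than by any single P × I pair.

```python
# A register item "supplier problems" really covers several distinct risks.
# All figures invented. Each entry: (probability, impact in pounds).
sub_risks = [
    (0.30, 10_000),   # minor delay
    (0.10, 80_000),   # lost contract
    (0.02, 500_000),  # supplier insolvency
]

# The distribution view gives an expected loss for the whole set.
expected_loss = sum(p * impact for p, impact in sub_risks)
print(round(expected_loss))  # 21000

# A single P x I pair must force one probability and one impact on the set:
# P = 0.30 with I = 500,000 wildly overstates it; P = 0.02 with I = 10,000
# wildly understates it. No single pair represents the distribution.
```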
Risk appetite cut-off: This is related to the P × I issue. The way this is usually explained is with a matrix having probability on one axis and impact on the other. A line is drawn separating risks with unacceptable probability and impact from those that are acceptable. The idea is that risks on the wrong side of the line need a response to get them onto the right side of the line (i.e. within our risk appetite). Although it is true that different people have different attitudes to risk at different times, and that the boundary is a good guide to where to spend time thinking of responses, this risk appetite boundary idea is not adequate as a rule for deciding which responses to take. It cannot be correct because it takes no account of the cost of the risk responses. A better rule, depending on your organisation, might be something like “Maximise the risk-adjusted shareholder value of the portfolio of actions we accept.”
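A small sketch of the contrast (all figures invented): the boundary rule demands a response regardless of its cost, whereas a value-based rule takes a response only when its expected risk reduction exceeds its cost.

```python
# Candidate responses to risks outside the "appetite" boundary.
# All figures invented: expected loss reduction vs cost of the response.
responses = [
    {"name": "extra inspections", "risk_reduction": 40_000, "cost": 10_000},
    {"name": "full redundancy",   "risk_reduction": 15_000, "cost": 60_000},
]

# Boundary rule: both responses are demanded, because the underlying risks
# sit on the wrong side of the line – cost never enters the decision.

# Value rule: take a response only if it adds net value.
worthwhile = [r["name"] for r in responses if r["risk_reduction"] > r["cost"]]
print(worthwhile)  # ['extra inspections']
```

A real portfolio rule would also consider interactions between responses and attitudes to risk, but even this crude comparison shows why cost cannot be left out.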
Individual responses to individual risks: Virtually all official guidance proceeds on the assumption that you will think up individual responses to individual risks. There might be a sentence somewhere to the effect that it's not necessarily like that, but the format of examples and the general message of other text is clear: individual responses to individual risks. This is not the only way of working, and often not the best. In safety work, for example, it is often the combination of risks that is most important. In designing internal control systems – my particular speciality – it is usually quicker and more effective to think up whole frameworks of controls very quickly as a multi-layered architecture. I'll adapt a generic design, or quickly build one by assembling and modifying “prefabricated” components. Later I might map risks to the controls to refine coverage, but the thinking process of design is very different from individual responses to individual risks, and far more time effective.
Linear sequence: Another general assumption in official documents about risk management is that risk management will proceed according to a linear sequence. Usually it begins with something about objectives, then moves on through one or more phases of risk thinking to risk responses, and then perhaps monitoring. It might be joined round to form a cycle.
The way people actually think through difficult problems looks nothing like this. We seem to jump around. It's not that people are messy and the linear sequence is an ideal; often people are simply following an “easiest first” rule, for example. There's a big difference between the theory and what really works, so I think this is another area that will develop in future. I suspect the linear view will become a model for documentation, not a literal prescription for thought.
Other techniques: Finally, there are many techniques of use to risk managers that are not mentioned by any official documents. Frustration can result when someone uses a good technique that is not mentioned and faces resistance from colleagues and auditors as a result. I can think of a case where someone had designed a risk management system for the risk of failing to achieve key priorities. In addition to the expected approach of asking for judgements of confidence in achieving priorities she asked managers to give ratings on a selection of more objective risk factors. This was a useful basis for challenging unrealistic confidence ratings. “You say that the priority is a tough one and you haven't succeeded in past years. Why are you now so confident of success?” The thinking behind this approach needed careful explanation because it was not the usual approach envisaged by official guidance. I thought it was a great idea.
So, in all these areas, we need to be very careful what we write in a standard. It would be easy to block good ideas. What wordings could we use?
‘Let's consider some of our alternatives for just one of the areas of difficulty: the upside. How much for or against including upside risks should it be? How prescriptive should it be?
|                       |                              | Mandatory?           |
|                       | Extra kudos if done?         |                      |
| Constrained approach? |                              | Prescribed approach? |
|                       | Covered by standard if done? |                      |
|                       | No comment?                  |                      |
|                       |                              | Not allowed?         |
In this illustration I've put options favourable to upside risk management to the top and those against it towards the bottom. I've also put options that are high risk to the right hand side, and lower risk options to the left.
By “high risk” I mean that the risk of making a mistake in writing the standard is high. In particular, contrast a prescribed approach, where we say risk management must be done in a certain way, with a constrained approach, where we could put any constraint on the way people work short of saying what they must do. For example, we could say “Do it any way you like as long as you write down your approach”, or “...as long as you explain why you have chosen to do it that way”, or “...as long as you don't do any of the following six illogical things”.
If we prescribe a single way to do risk management in any of the areas I have highlighted as problematic, there is a very high risk of writing a standard that blocks technical development, and a high risk of writing something that turns out to be flat wrong.
‘But whatever words we use, however we approach standards, let's make sure they promote the exciting future development of risk management instead of standing in the way.’
Here is a selection of papers from my web sites illustrating the potential for technical development in risk management.
‘What makes evolutionary project management so effective?’ (answer = improves risk profile dramatically)
‘Rapid project risk management’
‘Everyday risk management’
‘Risk modelling alternatives for risk registers’
‘How to be convincing when you are uncertain’
‘How to talk openly about uncertainty at work’
‘The basics’ (i.e. of risk management – includes 7 techniques for busy people)
Words © 2004 Matthew Leitch. First published 2004.