Working In Uncertainty

The crisis in management control and corporate governance

by Matthew Leitch, first published 23 November 2002 (updated 6 January 2003 and 9 September 2003).

Introduction to the crisis and revised questionnaire

This document explains in plain language why certain risk management methods often adopted by major organisations in the UK and elsewhere, in response to growing regulations on corporate governance and internal control, have proved largely ineffective and time-wasting in practice. This is not the result of incompetence or dishonesty; it is simply that requirements have increased while experience has shown that the techniques generally thought appropriate when first proposed do not work very well.

Since the first version of this questionnaire became available on the Internet I have received many e-mails from people around the world saying how good it is, and how challenging. Although I had initially intended it to apply to UK listed companies only, feedback shows that people in the public sector and in other countries with different regulatory requirements find it a powerful test and easy to understand.

This web page gives you information and insight that will help you decide if your own company has a problem with risk management, and whether to act on it. There's a detailed evaluation checklist so if you know what risk management is and just want to get ammunition or see if your own processes pass the test, skip the next section and go straight to the diagnostic questions. Alternatively, if you want to understand the UK background to the current crisis and read a description of risk management in corporate governance in plain English, carry on reading from here.

UK background to the crisis

In 1992, following a series of high profile corporate frauds and accounting scandals, the London Stock Exchange introduced new regulations covering various aspects of corporate governance such as who could be a director, what committees the Board of directors should have, and what steps they should take to ensure their company's accounts could be relied on and their assets were safeguarded. These new rules were based on the Cadbury Committee's Code of Best Practice for the financial aspects of corporate governance and applied to companies listed on the London Stock Exchange.

At around the same time a highly influential document was published in the USA, written by accountants Coopers & Lybrand for the Committee of Sponsoring Organisations of the Treadway Commission, and called the ‘COSO framework’. Accountants and auditors had for years been using the term ‘internal controls’ to refer to things people do in organisations to check for, or prevent, errors and fraud, particularly where they affect money, other valuable assets, and the accounting records.

The COSO framework took the traditional concept of ‘internal controls’ and pointed out that internal controls had to provide protection against risks (i.e. bad things that might happen) and that those risks would change over time, so organisations would have to monitor their risks and adapt their internal controls accordingly.

So, one of the things that companies started doing to meet the Stock Exchange's requirements was to get senior executives together in workshops to identify risks and think about what they were doing about them. The results of these workshops were written down and called ‘risk registers’ or ‘risk maps’. Typically, participants in the workshops would call out risks they thought of and the group would then rate the risk for its ‘likelihood’ and ‘impact’ and say what was being done about the risk and what more, if anything, needed to be done.

These workshops came to be called ‘risk management’ and, in theory, complemented more rigorous work on buying insurance for the company and calculating its exposure to financial risks such as currency fluctuations and outstanding debts. Banks have more complicated calculations to perform and need systems to provide daily risk statistics.

Proponents of this kind of process argued that it was good for companies and they should not even need the Stock Exchange's rules as motivation. They argued that the workshops should be carried on down through the levels of management in a company as something sometimes called ‘enterprise risk management’.

Another common response was to introduce a regular procedure where managers throughout the company had to sign documents saying that they thought the internal controls in the part of the business they were responsible for were adequate. This is usually called ‘control self assessment’. Often, this documentation would be the output of a workshop.

The original rules have since been revised and the current UK rules are within the Hampel Committee's ‘Combined Code’, with the requirements on internal controls being explained and interpreted in the ‘Turnbull guidance’ issued by the Institute of Chartered Accountants in England and Wales. Now UK listed companies have to evaluate their internal controls covering all types of risk, and not just the risk of incorrect accounts.

It is not surprising that risk workshops in most companies today are using methods and models that fall far short of the best available risk analysis and management techniques. The most common techniques were inspired by notions of risk and analysis used by accountants and auditors, which are non-mathematical and crude compared to styles of risk analysis developed in safety management, medicine, insurance, banking, investment, artificial intelligence, mathematics, and public policy analysis.

The Sarbanes-Oxley crisis

More recently Enron and then WorldCom collapsed, and yet more corporate scandals came to light, causing outrage around the world. In the USA the Sarbanes-Oxley Act of 2002 was enacted very quickly to put in place a range of new laws to make such scandals less likely. Included in this Act were two very interesting new requirements concerning internal controls, including the risk management processes that are supposed to keep internal controls up to date. Section 302 effectively forced SEC registered companies (including UK companies with a listing in the USA) to evaluate the effectiveness of the internal controls over any information they issue to the capital markets and publish the conclusions of their evaluation. Section 404 added a requirement for an annual assessment of the effectiveness of internal controls and procedures specifically for financial reporting, which must be published and attested to by the company's external auditors.

In other words, for the first time, in most cases, the effectiveness of internal controls was to be audited and publicly reported. It may surprise you that this had not been required before. Surely external auditors were already doing this? Well they weren't. Under the UK's Combined Code companies have to describe the procedures they have followed to evaluate their internal controls and external auditors have to confirm that what they say is true. If a company's procedures sound reasonable when described in very general terms the regulations are satisfied. There is no pressure for the procedures to be effective and no requirement for external auditors to comment on the effectiveness of internal controls. Therefore, the Sarbanes-Oxley Act was a great change for UK companies that also have a listing in the USA.

The requirements of sections 302 and 404 didn't come into force immediately. The Act called on the SEC (the regulator of financial markets in the USA) to introduce rules to enact the requirements of sections 302 and 404. Section 302 came into effect almost immediately, but did not require external auditing. The more controversial section 404 requirement for external audit was delayed after lengthy consultation on more than one occasion but now applies to large companies with shares listed in the USA.

The key point is that companies affected, in theory, now need to have an effective method of risk management in place if they are to avoid great embarrassment, and the indications are that many do not have an effective approach because of technical flaws in top-level risk assessment and management.

I say ‘in theory’ because in practice the true effectiveness of risk management workshops, risk registers, and the associated reporting has not been put to a proper test. The auditors who do the evaluations are simply happy to see the methods they believe should be in place, despite their obvious flaws.

But this could change at any time. All it would take is one influential scandal, or a growing trend towards critical review of risk registers, and the game would be up.

The management control crisis

At the same time, the global economy slowed and many companies that thought they were heading towards huge profits now found themselves in trouble. The worst affected companies include those linked to the internet (such as computer, software, and telecom companies) and companies linked to air travel.

Companies in difficulty are less able to absorb unexpected problems and desperately need to grab every good opportunity to improve their situation. Unfortunately, the style of management control that has become almost ubiquitous in developed countries since the mid 20th century does not perform well. Budgets and scorecards are supposed to provide management with a control mechanism that works like a thermostat, or collection of thermostats. Management set targets and the control system measures actual results and feeds back the difference between actuals and targets as a spur to action to reduce the differences.

This simply does not work well in practice for a number of fundamental reasons. Most importantly, problems have to affect a company's results before action is taken, which is too late, while opportunities are often ignored altogether because they do not give rise to variances.

Risk management involves looking ahead for things that might happen and taking action in advance. In principle this is clearly an important part of a better approach but so far what most companies have been doing is not frequent or effective enough to work properly.


Diagnostic questionnaire

This questionnaire gives a number of diagnostic questions for you to consider. It is in three sections. If your interest is mainly in whether your existing process has flaws which may be challenged by an external auditor you may prefer to complete only Section A. If you are also interested in whether risk management is doing something useful for your company take the time to complete sections B and C also.

You can print it off and write your answers in pen, or complete it on screen and print when you have finished. There is a button at the very end that will produce a convenient summary of your answers that you can copy and paste into a word processor or e-mail. If you leave this page before printing you will lose your work, though you can go Back to it from the summary page.

It doesn't matter if you are not sure what the answers are in some cases, but these questions work best if you have a copy of a recent risk register from your company and written procedures for the risk management process to hand, so you can check them for evidence of various faults. Each point is explained in practical terms so that you can make up your own mind as to how serious the problem is if it exists in your company.

The questionnaire is completely confidential. Your answers are not sent back to me or anyone else so if you don't print them off they're gone. If you use this diagnostic I don't expect you to let me know your results, but I would be very grateful if you could at least let me know you intend to use it so I can see that something is happening. I will keep the fact that you have used my questionnaire confidential. Send me a quick e-mail at matthew@workinginuncertainty.co.uk.


This questionnaire was completed by:


Section A: Reliability

The following questions test for some simple technical flaws that may be undermining the reliability of the risk management processes in your company. As far as possible they are factual questions, though some judgement will be needed if you think other precautions mitigate the issues the questions point to. There is space and freedom to bring in other factors before taking a view of the likely impact of the issues revealed.

Please read the explanation under each question before selecting your answer.

Question A1: Are risks identified solely by ‘brainstorming’ i.e. volunteered?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

In many companies the risks on the register are only the ones suggested by people in workshops. Although giving people freedom to make suggestions is useful and should be done, it is very dangerous to rely entirely on risks being volunteered because this causes a number of problems:

  • Key risks are omitted: People usually do not volunteer risks that might be perceived as criticism of people they are afraid of, or that the volunteer might be asked to manage, or that might make the volunteer look like a negative thinker or someone who doesn't know his/her job. Who would suggest ‘Collusive fraud by the Directors’ as a risk? Who would then have the nerve to rate it as a High risk?

  • Risks are unsystematic, inconsistent, and ill defined: Because the ‘risks’ arise from different perspectives they tend to be different ‘sizes’ (some very broad while others are narrow and specific) and arise from different, inconsistent ways of carving up future events. Inevitably definitions are poor as people summarise their ideas for the flip chart and because people have different ways of using words, even in the same company.

  • The risk register is dominated by the ideas of the fluent few: Some people make far more suggestions than others, just because they are fluent or motivated. This can mean that other functions or perspectives are under-represented.

Question A1.1: If risks are volunteered – are the suggestions undirected?

Answer : Not applicable:   Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

If all risks are volunteered, the problems are reduced if some kind of framework of generic risks, risk categories, or risk factors is used to help expose missing risks. This usually looks like a table the group has to fill in that already has some headings in it. This does not eliminate the problems of volunteering. Some risks are so rarely volunteered that they should be placed on the register by default and rated using factual diagnostics rather than by judgement.

Question A1.2: If some or all risks are derived from internal audit work – were the audits chosen as a result of requests from business managers and/or unsystematic audit ideas put forward by internal auditors?

Answer : Not applicable:   Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Risks arising from Internal Auditors and their work are likely to be more objective and subject to less self-censoring than those from business managers, but the problem still applies. The point of question A1.2 is to test whether the fact that risks have been volunteered is obscured by them being filtered through Internal Audit.

Once again, if the audit coverage of Internal Audit is derived from a comprehensive risk assessment then the risk of incompleteness is much less. This kind of analysis is systematic and should complement brainstorming.

Question A2: Is risk identification driven solely by the explicit objectives of your organisation?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

When organisational objectives are written they usually concentrate on improvements and positive achievements. Chairmen rarely announce that their objectives include reporting financial performance accurately, not breaking the law, and killing as few people as possible. These are usually taken as givens.

If risk identification is based on explicit objectives then risks related to these ‘givens’ can get lower priority than they should have. Internal auditors who have moved on to auditing items on their organisation's risk register have sometimes found that traditional basics like financial systems appear not to need auditing!

The explicit objectives are important and should help to prioritise risk ratings and responses, but they should not be the only objectives considered.

Question A3: Are multiple risks rated as if they were individual risks?

Answer : Not at all:   A few:   About half:   Most:   All:

The most common approach to rating risks is to judge them for their likelihood of occurring, and then for the impact if they did occur. This is sensible for individual risks, but not for multiple risks because the impact rating is impossible. If your company's risk registers have this fault then the ratings are illogical and undefined. They are meaningless and the fault should be corrected.

However, multiple risks are extremely common on corporate risk registers. This is not always obvious so if your register has separate ratings for likelihood and impact please read the following and check your risk register for these types of item.

  • General category names: For example ‘Health and safety’ is not a specific risk. It is a way of referring to all the risks in a category.

  • Lists of risks: Sometimes people actually list more than one risk explicitly within a single row on their risk register table.

  • Events that could occur more than once: For example, if your company has several data centres you could have several explosions at data centres so ‘Explosion at data centre’ is really a set of risks whose elements are ‘1 explosion at a data centre’, ‘2 explosions at data centres’, ‘3 explosions at data centres’, and so on.

  • Risks of variable extent: For example, an item like ‘Loss of market share’ is not a single risk but a set of them, one for each possible extent of loss of market share. Obviously a loss of 2% has a different impact to a loss of 20% and probably a different likelihood as well. Sometimes people try to deal with this by having more than one risk on the register. For example, ‘A small loss of market share’ and ‘A large loss of market share’. This shows an awareness of the problem but it is hardly a solution.

The logical approach to rating a set of risks is to construct a distribution that shows probability (or probability density) as a function of impact. For a risk register this can be shown as a set of graphs, and some statistical summaries can be used for more compact presentations. However, there are alternatives; see ‘Risk modelling alternatives for risk registers’.
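To make this concrete, here is a minimal sketch (in Python, with every figure invented purely for illustration) of how a set of risks such as ‘Explosion at data centre’ could be turned into a distribution of total impact. It simulates how many explosions might occur in a year and how much each might cost, then summarises the resulting distribution in a form compact enough for a risk register.

```python
import random

# Invented assumptions for illustration only: explosions arrive at an average
# rate of 0.2 per year, and each one costs between £1m and £10m (uniform).
RATE_PER_YEAR = 0.2
TRIALS = 100_000

def simulate_total_impact():
    """Simulate one year's total impact from the whole set of risks."""
    # Sample the number of explosions via exponential inter-arrival times.
    count, t = 0, random.expovariate(RATE_PER_YEAR)
    while t < 1.0:
        count += 1
        t += random.expovariate(RATE_PER_YEAR)
    return sum(random.uniform(1e6, 10e6) for _ in range(count))

outcomes = sorted(simulate_total_impact() for _ in range(TRIALS))

# Statistical summaries suitable for a compact risk register entry.
prob_any = sum(1 for x in outcomes if x > 0) / TRIALS
mean_impact = sum(outcomes) / TRIALS
p95 = outcomes[int(0.95 * TRIALS)]
print(f"P(at least one explosion) = {prob_any:.1%}")
print(f"Expected annual impact    = £{mean_impact:,.0f}")
print(f"95th percentile impact    = £{p95:,.0f}")
```

A histogram of the simulated outcomes, or a few summaries like these, gives the whole set of risks a defined and comparable rating, which a single likelihood/impact pair cannot.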

Question A4: Is the time horizon for risks unspecified?

Answer : Always specified:   Usually specified:   About half:   Mostly unspecified:   Always unspecified:

Suppose the risk register item in question is ‘Fire at our data centre’. Does that mean by any particular date? Obviously it makes a big difference whether you are talking about this week, this month, this quarter, or this century. The horizon should be specified, either for each item or by default for all items on the register.
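A quick worked example with an invented figure shows how much the horizon matters. If fires at the data centre are assumed to occur independently at an average rate of one every ten years, the probability of at least one fire within a horizon of t years is 1 - exp(-0.1 × t), which varies enormously with the horizon chosen:

```python
import math

RATE_PER_YEAR = 0.1  # invented assumption: one fire per ten years on average

for label, years in [("1 week", 1 / 52), ("1 month", 1 / 12),
                     ("1 quarter", 0.25), ("1 year", 1), ("10 years", 10)]:
    p = 1 - math.exp(-RATE_PER_YEAR * years)
    print(f"P(at least one fire within {label}): {p:.1%}")
```

The same risk register item is a negligible risk over a week and a near two-in-three chance over a decade, so a rating without a horizon is not really a rating at all.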

Question A5: Are risk ratings defined without reference to probability or money/utility?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Many companies rate risks using scales like ‘High/Medium/Low’ or a number from 1 to 5, but the meaning of each rating is not tied back to probability or some measure of value like money, so is undefined. This was advocated by many people on the grounds that it was simple and easy. However, by that argument doing no ratings at all would be even better, since it is easier still and just as ineffective.

It is obvious that these ratings are undefined and do not help when it comes to making decisions about what actions, if any, to take. However, there are some less obvious problems as well.

Question A5.1: If not defined using probability and money – were ratings given by different groups of people, without normalisation?

Answer : Not applicable:   Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Faced with undefined rating scales people respond in different ways. Some tend to keep to medium ratings, while others prefer to use ratings near the extremes. This is a well known issue in questionnaire design and one way to mitigate it is to normalise the ratings of each person so that their spread and mean score are the same before combining the scores with other raters.

If your company has workshops with different participants and leaders but does not normalise, the ratings cannot safely be compared. If one group prefers extreme ratings then the risks it is most concerned by will dominate the top of the risk league table for no good reason.
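Here is a minimal sketch of that kind of normalisation, assuming for illustration that each person rates the same risks on a 1 to 5 scale (the raters and figures are invented). Each person's ratings are rescaled to a common mean and spread before the scores are combined, so a cautious rater and an extreme rater carry equal weight.

```python
from statistics import mean, stdev

# Invented example: two raters scoring the same three risks on a 1-5 scale.
ratings = {
    "cautious rater": {"risk A": 3, "risk B": 3, "risk C": 4},
    "extreme rater":  {"risk A": 1, "risk B": 5, "risk C": 5},
}

def normalise(scores):
    """Rescale one person's ratings to mean 0 and spread 1 (a z-score)."""
    values = list(scores.values())
    m, s = mean(values), stdev(values)
    return {risk: (value - m) / s for risk, value in scores.items()}

normalised = {person: normalise(scores) for person, scores in ratings.items()}

# Combine the raters only after normalisation, so neither style dominates.
combined = {
    risk: mean(normalised[person][risk] for person in ratings)
    for risk in ratings["cautious rater"]
}
print(combined)
```

This is only one way of adjusting for rating style; defining the scales in terms of probability and money in the first place avoids the problem altogether.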

Question A5.2: If not defined using probability and money – were ratings given by people in different divisions or subsidiaries and combined without adjustment?

Answer : Not applicable:   Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

One interpretation of a rating scale not defined in absolute terms is that it is relative to other risks under consideration. In a group of companies it is common for the Board of each subsidiary to compile a risk register which shows which risks it is most concerned with, then combine the registers to get a group-wide view.

However, subsidiaries tend not to be of the same size or importance to the group. If this is not taken into consideration and ratings are relative then the High risks of an insignificant subsidiary will out-rank the Medium risks of a huge subsidiary, often incorrectly. The same can apply to divisions. Some kind of adjustment is needed, though it is better to use properly defined ratings in the first place.

Question A6: Are risk ratings grouped into a small number of levels, such as High/Medium/Low?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

On top of all the problems already discussed, the crude rating systems used in most early risk registers often lead to excessive quantisation. In other words, the small number of levels means the ratings are too broad to be useful.

Question A7: When impact ratings are made, are people asked to make direct estimates of the ultimate impact even though it is hard to judge?

Answer : Not at all:   Sometimes:   About half the time:   Usually:   Always:

Consider a question like ‘What is the impact on shareholder value of a computer glitch that leads to complaints from 2% of our customers over a 6 week period?’ This is quite a clear question, but very difficult to answer directly. It is much easier to make estimates of the impact on a customer's attitude to future purchases, and then the profit that might have been made from any lost purchases. You can imagine that a spreadsheet model might be needed to compute the impact in stages through to cash effects.

Many risk assessment workshops call on participants to make direct estimates of the ultimate impact for items where this is asking too much. It is better to have a quantitative model of the business and ask people for estimates of input parameters from which the model computes the ultimate impact.
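A sketch of the staged approach, using entirely invented stage names and figures, might look like this. Each stage asks for an estimate that someone can realistically judge, and the model multiplies the stages through to a cash figure instead of asking anyone to guess the ultimate impact directly.

```python
# Invented illustrative estimates for the customer-complaints example.
customers = 500_000              # size of the customer base
affected_fraction = 0.02         # 2% of customers complain about the glitch
defection_rate = 0.10            # estimated share of complainers who stop buying
annual_profit_per_customer = 40  # pounds of profit per retained customer per year
years_of_lost_custom = 3         # how long a lost customer stays lost

# Work the impact through in stages rather than judging the final figure directly.
complainers = customers * affected_fraction
lost_customers = complainers * defection_rate
lost_profit = lost_customers * annual_profit_per_customer * years_of_lost_custom

print(f"Estimated impact: £{lost_profit:,.0f}")  # £120,000 with these inputs
```

Each intermediate estimate can be challenged, supported by data, or replaced by a probability distribution, which is far harder to do with a single figure plucked from the air.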

Question A8: When ratings of probability are made, are the assumed conditions unclear?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Sometimes when people are asked to estimate a probability they do so as if it is a conditional probability. For example, without really thinking much about it someone might say ‘I think the probability of selling more than 1m units is 0.6’ without mentioning that they are assuming general economic conditions remain the same. Alternatively, it may be that you are trying to elicit a conditional probability, but the expert you ask thinks about the wrong conditions, or forgets to apply the conditions.

Either mistake can lead to inaccurate probability estimates. Overly narrow distributions for future outcomes are the most common result.

Question A9: When ratings are made, is the assumed mitigation unspecified?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Mitigation means things done to make risks more or less likely or modify the impact if the risk occurs. When rating likelihood and impact (regardless of the technique used) it is possible to make various assumptions about what mitigation is in place.

Obviously ratings are unusable if people don't know what mitigation to assume.

Question A9.1: If the assumed mitigation is clear – are there ratings assuming no mitigation at all?

Answer : Not applicable:   Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

The alternative levels of mitigation that could be assumed when rating risks are:

  • No mitigation at all: This is normally extremely difficult to do as some mitigation is usually in place already and people have experience of the mitigated reality but not of the hypothetical unmitigated situation. For example, the risk of fire assuming no fire precautions were taken is not something people can really judge.

  • Current mitigation only: This is the easiest to rate.

  • Proposed mitigation: This means current mitigation as modified by proposed changes.

The most useful and feasible ratings are the last two, which are also the ones envisaged by the Turnbull guidance. It is normally a mistake to do ratings assuming no mitigation at all because they are not needed and not feasible anyway. If you want to challenge the necessity for existing mitigation you can do it by proposing a lower level of mitigation and judging the effect.

Question A10: Are risk ratings unsupported by data even when they could be supported by data?

Answer : Never supported:   Sometimes:   Neutral:   Usually supported:   Always supported if possible:

Subjective ratings of risks are better than no ratings at all, but ratings supported by data are much better still. Many risk ratings are unsupported by data even when data is available and the analysis would be feasible and worthwhile.

Question A11: Is ranking of risks undermined by unequal levels of aggregation?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Take two risk sets ‘Accidents injuring staff’ and ‘Accidents injuring staff while making drinks’. Clearly the latter is a subset of the former, so its risk is smaller. It is easy to see that these risk sets are not at the same level of aggregation. But what if they weren't related? How could you compare the level of aggregation between ‘Accidents injuring staff’ and ‘Fire damaging computer equipment’?

It may be that when you look at the ‘top 10’ risks on your risk register some of them are there mainly because they are the most aggregated.

There are some sensible ways to get round this problem. The risk register is a means of prioritising management time and attention, so the risk register items need to be the appropriate prioritisation units. For example, if a meeting is held regularly to review progress on a set of projects and each project has a manager who might be called upon to provide an update, the prioritisation unit is a project. With a list of projects, each with a risk rating, the ratings serve as a guide to how much time the meeting should expect to give to the risks on each project.

Another approach is to aggregate risks into sets so that each set contains roughly the same amount of risk. This maximises the information content of reports based on the risk register (see the section on relative entropy in ‘Design ideas for Beyond Budgeting management information reports’). Each risk set gets equal attention.
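One simple way to form such sets, sketched below with invented figures, is to sort the individual risks by size and group them greedily until each set holds roughly an equal share of the total risk. This is only one possible scheme, not a prescribed method.

```python
# Invented expected annual losses (in pounds) for some individual risks.
risks = {
    "major system outage": 900_000,
    "key supplier failure": 600_000,
    "minor data errors": 250_000,
    "staff injury claims": 150_000,
    "office equipment theft": 60_000,
    "stationery waste": 40_000,
}

TARGET_SETS = 3
target = sum(risks.values()) / TARGET_SETS  # aim for equal risk in each set

sets, current, current_total = [], [], 0.0
for name, size in sorted(risks.items(), key=lambda item: item[1], reverse=True):
    current.append(name)
    current_total += size
    if current_total >= target and len(sets) < TARGET_SETS - 1:
        sets.append(current)
        current, current_total = [], 0.0
if current:
    sets.append(current)

for i, group in enumerate(sets, 1):
    total = sum(risks[name] for name in group)
    print(f"Set {i} (£{total:,.0f}): {', '.join(group)}")
```

The grouping will never be perfectly equal, but it stops the most aggregated items from dominating the ‘top 10’ simply because of how they were worded.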

Question A12: Are the risk ratings used to filter risks?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

If there is any reason to doubt the reliability of risk ratings (either likelihood or impact, or both) the impact of mis-rating is greater if at any point risks are filtered on the basis of their ratings.

Question A13: Are the risk registers/maps out of date?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

"Out of date’ means people were not aware of the risks and had not assessed them at the point where that knowledge was, or should have been, used. A risk register is likely to be out of date in this sense if it is updated quarterly or even less frequently according to a calendar rather than being driven by events. Some quarters will be fine, and then something will happen that needs an immediate update of the company's view of its future.

Summary for Section A

Note here any other factors relating to the completeness of risk registers.

Taking everything into consideration, does this company need to change its methods to ensure that key risks are not omitted?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Note here any other factors relating to the ratings of risks or sets of risks.

Taking everything into consideration, does this company need to change its methods to ensure that key risks are more accurately rated?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Note here any other factors relating to whether the risk registers are up to date.

Taking everything into consideration, does this company need to change its methods to ensure that its risk registers are up to date?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:


Section B: Usefulness and workshop productivity

The following questions look at whether risk management is useful and well regarded in your company. This involves some subjective assessments and answers will depend in part on who gives them. For example, a Risk Manager is likely to believe that risk management is more popular, important, and well regarded than would others in the company. Nevertheless these questions need to be asked and the explanatory notes give objective indicators to help limit the effect of subjectivity.

Question B1: Do the workshops fail to lead to worthwhile new ideas and actions?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

The Turnbull guidance says that the risk management process should confirm that internal controls address current risks, but in practice the expectation is that it has a more active role in actually making this the case. If the risk workshops do not generate new insights, ideas, and actions they are probably not useful to your organisation.

In some cases risk registers are little more than the company's existing plans redrafted in the form of a risk register. These contain items like:

  • Objective: Customer satisfaction.

  • Risk: Customer dissatisfaction.

  • Control: Customer Satisfaction Programme – see Strategy document.

Question B2: Do the workshops use the Objectives -> Risks -> Controls format, without reference to risk drivers including actual events and external forces?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

In the COSO framework, by definition, a risk is only a risk if it affects the company's attainment of its objectives. This has led many people to start their risk analysis by listing their objectives and then proceed straight on to listing risks.

This often leads to a boring and inwardly focused analysis that adds little value, with items that read very much like the example given for the previous question.

It is essential to survey internal and external dependencies, trends, events, and projections and use these to identify risks relevant to the company's objectives. The format of workshops and documents should reinforce this.

Question B3: Do the workshops lack a step for identifying actions to monitor risks?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

It's obvious that ‘controls’ or ‘mitigation actions’ need to be set against each risk or group of risks that are of interest, but it is not so obvious that some kind of monitoring action is also needed so that future evidence about the risk is effectively searched for and used. This helps ensure the risks are updated promptly.

Having ‘owners’ for risks is not enough on its own unless some sources of evidence have been identified.

Without a monitoring action there will be no updating until the next, scheduled risk assessment.

Question B4: Is there no clear link between the risk management workshops and controls over major business processes and systems?

Answer : Definitely linked:   Probably linked:   Neutral:   Probably not linked:   Definitely not linked:

There is no clear link between the risk workshops and major business processes unless the workshops specifically consider the factors that drive what controls are needed for major business processes. These factors include:

  • New or changed systems or procedures: Changes require adjustments to controls. As new systems are being developed or implemented the health of the project is an indication of the need for checking and management monitoring once the new elements go live.

  • Volumes: The volumes, variability of volumes, and rate of increase/decrease are all vital indicators of the need for controls and the type of controls that will be most cost-effective.

  • Data complexities: These have a strong effect on error rates.

  • Competitive strategy/main performance priorities: These constrain the controls that can be used as well as determining the performance requirements on controls. If the strategy changes the internal controls should at least be reviewed.

  • Geography: Different controls are needed if people are in different places rather than all in one office.

  • Staffing: Increases, decreases, and churn of staff are all important in determining the amount of work people can do and the skill and accuracy of what they do.

  • Learning from current performance: Even the best risk analysis will not be accurate so it is vital to capture, summarise, and monitor statistics on process health and act on them.

If information on these and other relevant factors is collected and used at risk workshops it will be possible to identify the need for change and assess current effectiveness.

This kind of assessment, where applied, is compatible with the requirements of the Sarbanes-Oxley Act in sections 302 and 404. It is questionable whether just assessing the design of internal controls to see if they appear to be sensibly designed meets the requirement to assess effectiveness.

Question B5: Is the process unpopular (allowing for politeness)?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Feedback from workshops will normally show them to have been productive and worthwhile, but this is not a reliable indicator. More reliable indicators include the reaction to suggesting more frequent workshops, or workshops for lower levels of management, or more preparation or detailed analysis.

Is the Risk Manager seen as an overly cautious pessimist whose answer is usually ‘No’ or as someone whose role is to open minds to the full range of potential outcomes, including opportunities?

Question B6: Do the workshops and registers consider only bad things that might happen?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

One reason many people do not like to spend too much time on risk management is that most risk management processes consider only bad things that might happen (sometimes called ‘downside risks’). Dwelling on these without also considering unexpectedly good things that might happen is often a rather depressing experience that many people feel threatens teamwork and confidence.

A recent trend in risk management is to combine risk and opportunity management into one process so that it is balanced and the main aim is to open up thinking to the full range of likely outcomes.

Summary for Section B

Note here any other factors relating to the usefulness of this company's risk management workshops, risk registers, and other related procedures.

Taking everything into consideration, does this company need to change its methods to make these efforts more productive?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:


Section C: Contribution to good management

These questions continue to probe the usefulness of risk management in your company by looking at the extent to which risk and uncertainty are taken into consideration in decisions and as a means to control the company. It is not generally agreed how this should be done, but the questions below test for practices that experts in these areas consider essential.

Unlike the questions in sections A and B above, the following points are unlikely to be taken into account by external auditors at this time. Nevertheless, the answers are important for the effectiveness of management control.

Question C1: Is budgetary control or a similar system using fixed targets relied upon to keep the company ‘on track’?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

The question here is whether negative feedback control loops (typically budgetary controls) are seen as the way your company reacts to unexpected changes or whether risk management (i.e. anticipation and action in advance to manage a range of potential outcomes) is the primary means of dealing with an uncertain future. In answering this question consider which currently takes up most management time, which generates the most reports, and which most strongly affects how managers are assessed and rewarded.

The style of management control traditionally favoured by accountants is one where targets/budgets are agreed and then for a period of a year the difference between actual results and the targets/budgets is reported as ‘variances’. This is supposed to work like a system of thermostats (i.e. negative feedback control loops) allowing management to set the results they want and then rely on incentive systems to drive variances down.

This is not very effective in dealing with uncertainty about the future. It means that results have to be different from budgets before action is triggered (assuming the variances can be understood) and by then there often is no action that can get the company back on track.

Budgetary control and negative feedback control loops generally are particularly poor at dealing with unexpectedly good outcomes. If things turn out better than expected that should be a signal to managers to rethink their plans and try to take advantage of their good fortune. With budgetary control this rarely happens.

Risk management is about looking forward to things that might happen and taking action in advance where appropriate so it clearly has a role in management control. While the formal workshops held for Turnbull compliance may not perform this function in your company, management meetings are certain to consider risks and action in advance to some extent.

Question C2: When decisions are supported by quantitative models, are key independent variables each modelled by a single, best estimate (possibly with pessimistic and optimistic alternatives) even when they are not known for certain?

Answers :
Decisions > £100m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:
Decisions >   £10m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:
Decisions >     £1m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

The point of this question is to test whether risk and uncertainty are properly modelled and considered in decisions made in your company. Human intuition about uncertainty is notoriously unreliable so it is vital for big decisions to be supported by quantification that handles uncertainty correctly.

If key input variables (e.g. expected demand for a product) about which there is uncertainty are represented by a single number, which is the best/average estimate, the model is likely to give incorrect results because of the ‘Flaw of Averages’. The Flaw of Averages is the assumption that average inputs give average outputs, so even if there is uncertainty about input variables people think it is safe to use averages throughout.

In fact average inputs give average outputs only if the model is linear, which hardly any realistic business models are. Sometimes the size of the error resulting from this flaw is large and completely unsuspected.

The correct approach is to represent key input variables about which there is uncertainty using an appropriate probability distribution, which shows the relative likelihood of the variable being at any particular value. Even if this distribution can only be estimated subjectively the results will be more accurate and useful than if a best guess is used.

Sensitivity analysis is not a solution to this problem because it involves varying one variable at a time. Optimistic and pessimistic estimates are generally not a solution because ‘optimistic’ and ‘pessimistic’ are rarely defined in terms of probabilities and, even when the probability of each is defined, optimistic and pessimistic values for multiple variables cannot sensibly be combined.
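A short Monte Carlo sketch, with invented figures, makes the Flaw of Averages concrete. The model below is non-linear because sales are capped by limited capacity, so plugging in the average demand overstates the average profit; sampling demand from a distribution reveals the true expected outcome and the spread around it.

```python
import random

# Invented illustrative model: profit from a product with limited capacity.
CAPACITY = 100_000                         # units we can actually supply
MARGIN = 5.0                               # pounds of profit per unit sold
MEAN_DEMAND, SD_DEMAND = 100_000, 30_000   # uncertain demand, assumed normal

def profit(demand):
    """Non-linear model: sales cannot exceed capacity."""
    return MARGIN * min(max(demand, 0), CAPACITY)

# Flaw of Averages: feed in the average demand and read off a single answer.
single_estimate = profit(MEAN_DEMAND)

# Better: sample the demand distribution and look at the whole output range.
samples = sorted(profit(random.gauss(MEAN_DEMAND, SD_DEMAND))
                 for _ in range(100_000))
expected = sum(samples) / len(samples)
p10, p90 = samples[int(0.1 * len(samples))], samples[int(0.9 * len(samples))]

print(f"Profit using average demand: £{single_estimate:,.0f}")   # £500,000
print(f"True expected profit:        £{expected:,.0f}")          # roughly £440,000
print(f"80% range:                   £{p10:,.0f} to £{p90:,.0f}")
```

With these invented figures the single-estimate answer is around 14% too optimistic, and only the simulated distribution shows how wide the range of plausible outcomes really is.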

Question C3: When decisions are supported by quantitative models, are all key output variables displayed as a single, best estimate (possibly with pessimistic and optimistic alternatives)?

Answers :
Decisions > £100m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:
Decisions >   £10m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:
Decisions >     £1m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

This is the usual consequence of modelling uncertain input variables incorrectly and tends to drive thinking that is far too narrow. People start planning on the basis of the most likely outcome rather than the range of likely outcomes. No credit is given for managing outcomes that are not the most likely, even though the one thing we can be reasonably certain of is that the outcome will not be the most likely one.

A better method is to show the probability distribution of key output variables, such as the NPV of a project.

With a model that does not express uncertainty explicitly you can ask ‘How much money will we make?’ (and probably get the wrong answer). With a model that shows uncertainty explicitly you can ask ‘What is the probability that we will make at least £X from this?’ or even ‘What is the probability that this will bring down our company?’

Question C4: When decisions are supported by quantitative models, are the effects of options to take future decisions ignored?

Answers :
Decisions > £100m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:
Decisions >   £10m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:
Decisions >     £1m : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Another feature of most financial models is that the future is assumed to be set when in fact there will be many opportunities to make decisions in response to unfolding events. This can also give some large errors.

It is better to identify such options and even to plan projects to increase them. Intuitively most managers know this, but financial models need to reflect it by combining discounted cash flow methods with decision trees.
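The following small sketch, with entirely invented numbers, shows the principle of combining discounted cash flow with a decision tree. The expected NPV of a two-stage project is computed first as if the company were locked into its plan, and then again recognising an option to abandon after year one if early results are poor.

```python
# Invented two-stage project: invest 100 now, learn after year 1 whether the
# market is good (60% chance) or bad (40%), then receive year-2 cash flows.
DISCOUNT = 0.10
P_GOOD = 0.6
INVEST_NOW, INVEST_YEAR1 = 100.0, 50.0   # phased investment
CASH_GOOD, CASH_BAD = 300.0, 20.0        # year-2 cash flow in each state

def pv(amount, year):
    """Present value of a single cash flow."""
    return amount / (1 + DISCOUNT) ** year

# Locked-in plan: the year-1 investment is made whatever happens.
npv_locked_in = (-INVEST_NOW - pv(INVEST_YEAR1, 1)
                 + P_GOOD * pv(CASH_GOOD, 2)
                 + (1 - P_GOOD) * pv(CASH_BAD, 2))

# Decision tree: in the bad state the company abandons instead of investing,
# avoiding the year-1 outlay and forgoing the small year-2 cash flow.
continue_bad = -pv(INVEST_YEAR1, 1) + pv(CASH_BAD, 2)
abandon_bad = 0.0
npv_with_option = (-INVEST_NOW
                   + P_GOOD * (-pv(INVEST_YEAR1, 1) + pv(CASH_GOOD, 2))
                   + (1 - P_GOOD) * max(continue_bad, abandon_bad))

print(f"Expected NPV if locked in:           {npv_locked_in:.1f}")
print(f"Expected NPV with option to abandon: {npv_with_option:.1f}")
```

With these made-up figures the right to walk away roughly doubles the expected NPV, a difference that a fixed-plan model would never reveal.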

Question C5: Do your company's leaders make it clear that they expect to see delivery of originally committed results according to original budgets and deadlines, and that they expect managers to be certain in their minds and actions?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Unfortunately, these behaviours tend to push people into premature commitments to overly specific targets, to suppress/deny uncertainty, and to do nothing to manage it. If things go badly people keep it secret hoping to recover. If this doesn't happen then eventually the bad news will come out – often too late for anything to be done.

Leaders should make it clear that they are looking for excellent risk management and planning, and should encourage people to be open about uncertainties, discuss them realistically, and manage them.

Summary for Section C

Note here any other factors relating to the contribution of deliberate risk management to running this company.

Taking everything into consideration, would this company benefit from strengthening mechanisms by which risk is managed in decision making and steering the company?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Note here any other factors relating to expressing uncertainty in financial models.

Taking everything into consideration, would this company benefit from expressing uncertainty explicitly in more of its financial models?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Note here any other factors relating to the way management's behaviour encourages or discourages people from being open about uncertainty and managing it.

Taking everything into consideration, would this company benefit from a style of leadership that gave more encouragement for good management of risk and uncertainty?

Answer : Definitely No:   Probably No:   Neutral:   Probably Yes:   Definitely Yes:

Finally

Don't forget to print this off now or you will lose your work. Click on the ‘Display summary’ button below if you want a convenient summary of your answers. Please let me know you used my questionnaire by sending me a short e-mail at matthew@workinginuncertainty.co.uk. The fact that you have done so will remain confidential.







Words © 2002 Matthew Leitch.