What’s wrong with risk matrices: criticism and defense

Methodological debates are an inherent part of the risk management profession. A significant share of that attention is directed at risk matrices, which are regularly criticized by experts and thought leaders. So let us address the issue and examine some of the existing criticism of risk matrices.


In this article I will argue that risk matrices are a useful tool in risk management when they are combined with a collaborative approach and a risk analysis platform such as Inclus. With modern tools, risk matrices can function as an effective medium for compiling risk information, building common understanding, and supporting decision-making in complex environments. Read more about the benefits of inclusive risk analysis processes in my other article.

Certainly, risk matrices have their limitations, as their applications are many and organizational contexts vary. It is beyond the scope of this blog to discuss all of them, since risk matrices are one of the most discussed and criticized topics in the profession. Yet they remain a widely used method in many organizations and projects, which indicates some degree of usefulness, or a lack of imagination. Let us look at some of the issues they are criticized for:

  1. Risk matrices do not measure any meaningful quantities of risk

  2. Risk matrices produce poor estimates due to inherent scale ambiguities and inconsistencies

  3. Risk matrices lack scientific basis and “objectivity”

  4. Risk matrices cannot advise on how much one should invest in risk controls

Let me address these concerns in order, as this criticism completely overlooks the flexibility of inclusive risk matrices and the value they add.

Not measuring any meaningful quantities of risk

The first criticism is that risk matrices do not measure any meaningful quantities of risk, such as expected monetary losses (see Hubbard 2020, p. 64). According to this criticism, quantifying risk in clearly measurable metrics such as financial losses can help communicate the magnitude of the impact and steer decision-making.

Well, modern risk matrices – such as those in Inclus – are not incompatible with risk quantification in any way. The rest of the criticism is true, to some extent. But there’s more to it.
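To make the first point concrete: expected monetary loss is simply the probability of a risk multiplied by its estimated cost. The minimal sketch below, with purely hypothetical figures and thresholds (it is not Inclus’s implementation), shows how such a quantitative estimate can sit alongside a matrix placement rather than replace it.

```python
# A minimal sketch with hypothetical figures and thresholds (not Inclus's
# implementation): a quantitative expected-loss estimate and a matrix
# placement can be derived from the same probability and impact estimates.

def expected_monetary_loss(probability: float, impact_eur: float) -> float:
    """Expected loss = probability of the risk realizing * estimated cost if it does."""
    return probability * impact_eur

def matrix_cell(probability: float, impact_eur: float) -> tuple[int, int]:
    """Map the same estimates onto a 5x5 matrix (1 = lowest, 5 = highest)."""
    likelihood_score = min(5, max(1, round(probability * 5)))
    impact_bands = [10_000, 50_000, 250_000, 1_000_000]  # hypothetical EUR thresholds
    impact_score = 1 + sum(impact_eur > band for band in impact_bands)
    return likelihood_score, impact_score

# Example: a 20 % chance of a roughly 300 000 EUR loss.
print(expected_monetary_loss(0.2, 300_000))  # 60000.0
print(matrix_cell(0.2, 300_000))             # (1, 4)
```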


Example of a cost impact assessment in Inclus.

No doubt, understanding financial losses is very useful in serving the needs of decision-making, if only it can be achieved. People tend to trust numbers, even when those metrics are imperfect. See, for instance, Porter’s Trust in Numbers for a great critical examination of quantitative models and their popularity.

However, debating quantitative versus qualitative research methods feels like going back decades in the history of the social sciences: the choice between the two is clearly a false dichotomy. There is a need for both types of information, and above all, they do not exclude each other. Studies and methodological literature on effectiveness evaluation remind us that it is very useful to try to understand the different dimensions of an impact, which cannot be collapsed into a single number or estimated by one person. This applies especially to risks that deal with people, extremely uncertain operating environments, or complicated social and organizational systems, such as complex business or project environments. You cannot build reliable models to quantify all those risks, because you cannot fully model human behavior.

Consider, for instance, the key personnel risk of a project. You could quantify the risk by considering the costs of a new hire, but without asking the rest of the project team, you know very little about the person’s contribution to the project and the potential problems that might lie ahead (schedule, quality, cultural fit, etc.). Your colleagues are often in a privileged position regarding their know-how. As a risk manager, you need to harvest that information.

Inclusive analysis processes add value to both types of estimates by increasing their reliability, especially when there are no sufficiently reliable models for constructing the (future-oriented) estimates. Estimating financial losses can be an extremely hard and subjective endeavor if it is not based on the best available information or has not been critically scrutinized.

To some degree, it is also a bit silly to try to find “objective” numbers for everything. One of the lessons from the literature on evidence-informed decision-making is that even the most ambitious cases of knowledge-production for decision-making (e.g. randomized controlled trials) require active leadership and choices about which impacts to study and which methods to use, and such guidance is often missing even in science.

“Objective” advanced methods run the risk of hiding methodological choices and assumptions, which can create more confusion within the organization if the agenda has not been carefully determined (see, for instance, the literature on inductive risk, e.g. Douglas 2000). What most traditional risk matrices fail to capture is that the social process producing those numbers should not try to force a perfectly unified “objective” view by mere calculation where such a view cannot be achieved. Rather, those ambiguities and inconsistencies should be made visible to stakeholders so that they can draw their own conclusions and discuss them as experts and decision-makers.

Even if you found perfect leading risk indicators and metrics and followed them carefully, certain questions would not disappear: are we measuring the right things? Is there other risk information within the organization that the indicators do not capture? How should the current situation be interpreted? Have the situation or the needs changed? Have other risks or goals been identified since the metrics were set up?

Scale ambiguities and inconsistencies

The second criticism is that risk matrices produce poor estimates because of inherent scale ambiguities and inconsistencies (see Hubbard 2020, pp. 169, 174). Some worry that scoring risks is useless because the scale options cannot be defined unambiguously enough for all participants to understand them in the same way.

Well, language and communication always involve ambiguities and inconsistencies, no matter what. They do not go away by collapsing complex phenomena into a single number or dashboard. This is precisely why there is a need for truly collaborative tools and shared dialogue about risks: their nature, their magnitude, and the measures required to control them. We use simplifying and vague language every day, which easily hides the ambitions, fears, and expectations that suddenly become relevant when we start discussing risks, uncertainties, and success.

Too often, we assume that we know what our colleagues think, value, or fear, until we realize that we did not. The point is not to remove all ambiguities and inconsistencies through methodological choices, but rather to address them directly and explicitly by making them visible, as the sketch below illustrates. They exist in the organizational setting and among diverse professional groups regardless of whether you admit it or not.
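As a minimal illustration of what making ambiguities visible can mean in practice, the sketch below (hypothetical participants and scores, not a description of any specific Inclus feature) collects one risk’s impact scores from several participants and reports the spread instead of averaging it away: a wide spread is a signal to discuss, not noise to hide.

```python
# A minimal sketch with hypothetical participants and scores: instead of
# collapsing the scores into a single number, surface the disagreement.
from statistics import median, pstdev

def summarize_scores(scores: dict[str, int]) -> dict:
    """Summarize 1-5 impact scores given by several participants for one risk."""
    values = list(scores.values())
    spread = max(values) - min(values)
    return {
        "median": median(values),
        "std_dev": round(pstdev(values), 2),
        "spread": spread,
        "needs_discussion": spread >= 2,  # wide disagreement -> talk it through
    }

impact_scores = {"Anna": 2, "Ben": 5, "Carla": 4, "David": 2}
print(summarize_scores(impact_scores))
# {'median': 3.0, 'std_dev': 1.3, 'spread': 3, 'needs_discussion': True}
```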

Here too, an inclusive analysis process adds value by enhancing dialogue and making different know-how and interpretations of the risks visible. It offers a constructive and neutral channel for engaging with colleagues, with the mindset and tools required for active and successful management of uncertainties.

Lack of scientific basis and objectivity

The third criticism is that risk matrices lack a scientific basis and are “too subjective” or “mere guesswork”. This criticism often builds on behavioral economics, especially the notions of biases and heuristics popularized by Tversky and Kahneman (see, for example, Hubbard 2020, pp. 135-162, 169). Biases limit human rationality, making people to some extent an unreliable source of knowledge. Consider how people discount their own future well-being in favor of more immediate but lesser rewards (which goes against what we would expect from a rational agent), how experts can be overconfident in their own thinking, or how people tend to see realized events as necessary developments that had to happen. These are examples of why we cannot trust individuals in all instances.

Well, it is true that individuals are not fully rational and do not know everything. However, a group of experts together can provide a corrective mechanism for individual biases and inconsistencies. This should not be a surprise. Even the most objective method, the scientific method, relies largely on a social, self-corrective, and cumulative learning process built on peer review and sustained research efforts. There is also very recent research highlighting the importance of expert communities even in the production of scientifically valid information (see Koskinen, forthcoming).

Inclusive risk analysis helps ensure that the best arguments prevail. Diverse expert insights increase common understanding and provide useful resources for risk mitigation actions. In the process, experts become more responsive to the views of others and tend to converge, ultimately strengthening collaboration.

I think there is a professional tendency to admire more rigorous and advanced methods that showcase the brilliance of the risk manager. There is certainly no harm in having more tools for different situations, but sometimes focusing on the social process itself serves the needs better. All methods, even the most advanced and scientifically sound ones, introduce new assumptions and uncertainties. Uncertainty, lack of data and knowledge, and limited capacity for prediction all characterize the risk management profession, and there is no way to remove them entirely.

Additionally, another threat lurks around the corner when more advanced methods are introduced. It concerns the division of intellectual labor and the lack of interconnectedness between knowledge-production and decision-making, inherited directly from how scientific knowledge-production and decision-making have historically been separated into their own boxes (see Douglas 2009). The same applies to the distinction between risk analysis and risk management: a somewhat artificial classification into “objective” analysis on the one hand and value-laden decision-making on the other. There were historically good reasons for this separation (such as protecting the objectivity of knowledge-production from political interference in certain contexts), but it no longer captures how decision-making actually works (see Douglas 2009).

Consider, for instance, Deloitte’s latest report on Enterprise Risk Management in Finland, where 67 % of the respondents thought that the risk management function “advice and challenge Board of Directors’ and Executive Management’s strategic planning” only “to little extent”, while 17 % thought “not at all”. In the same fashion, 67 % responded that information generated by the risk management function is used only “to some extent”. Introducing more advanced methods for knowledge generation runs the risk of becoming detached from the needs of decision-making when not carefully guided.

Not supporting decision-making

The fourth criticism is that risk matrices cannot advise on how much one should invest in risk controls, unlike methods such as cost-benefit analysis (see Hubbard 2020).

Of these four critiques, this is the one I would be most concerned about. It is a good and complex issue that we have already touched upon in this article. Risk matrices are only one medium for risk-related information and decision-making. It is also debatable how closely risk matrices even link to decision-making on some occasions. Sometimes they are simply tools for discovering new things, opening possibilities for further decision-making and risk control. According to the principles of ISO 31000, the process should be continuous and frequent. However, there are many other ways to link risk information and decision-making as well. One example is multi-criteria decision-making (MCDM), which pairs very well with an inclusive approach.

However, if there is no leadership steering the process and the evidence-collection for the needs of decision-making, it might make sense to start with risk matrices and ask questions about the biggest threats, the magnitude of impacts, likelihoods or frequencies, and the level of controls. The best people to answer these questions are those who are involved. Without their input, it is easy to overlook potential courses of action, or even to be completely mistaken about the impacts of a risk or the benefits of potential mitigation actions.

While risk matrices are only a starting point for decision support, inclusive analysis tools are flexible and can support decision-making in many ways depending on the context. For instance, in making investment decisions, one could rely on multi-criteria decision-making (MCDM), which can illuminate the pros and cons of competing alternatives and create a structured dialogue process; a simple weighted-sum sketch follows below. It all starts from understanding the kind of contribution your colleagues can make to decision-making, risk analysis, or controlling those risks. This is achieved by engaging and listening, and with modern tools that support shared learning and the analysis process.
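To illustrate how MCDM can structure such a comparison, here is a minimal weighted-sum sketch with hypothetical criteria, weights, and mitigation alternatives. Real MCDM methods (and the workflows in a platform like Inclus) are richer than this, but the basic principle of scoring alternatives against weighted criteria and comparing the totals is the same.

```python
# A minimal weighted-sum MCDM sketch (hypothetical criteria, weights, and
# alternatives): score each option against weighted criteria and compare totals.

criteria_weights = {"cost": 0.4, "risk_reduction": 0.4, "ease_of_adoption": 0.2}

# Scores 1-5 per criterion, e.g. collected from the project team.
alternatives = {
    "Extra supplier audit": {"cost": 4, "risk_reduction": 3, "ease_of_adoption": 5},
    "Dual sourcing":        {"cost": 2, "risk_reduction": 5, "ease_of_adoption": 2},
    "Larger safety stock":  {"cost": 3, "risk_reduction": 4, "ease_of_adoption": 4},
}

def weighted_score(scores: dict[str, float]) -> float:
    """Sum of criterion scores weighted by the agreed criterion weights."""
    return sum(criteria_weights[criterion] * score for criterion, score in scores.items())

for name, scores in sorted(alternatives.items(), key=lambda kv: weighted_score(kv[1]), reverse=True):
    print(f"{name}: {weighted_score(scores):.2f}")
# Extra supplier audit: 3.80
# Larger safety stock: 3.60
# Dual sourcing: 3.20
```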

To conclude

To summarize, some of this criticism has valid points, but as one can see, there is a lot you can do with risk matrices when you add an inclusive process to support them. After all, it is about how you engage and communicate with your colleagues, how you collect relevant information, and what the quality of that information is. And again: how you communicate it to your colleagues so that they can draw conclusions and take action. And, what is often more difficult: how you act on and understand the situation together as a group. For those purposes, inclusive and dynamic risk matrices are irreplaceable tools. A static risk matrix in a spreadsheet rarely gets the job done.

Author

Valtteri Frantsi
Partner & Head of Customer Success
Inclus Ltd
valtteri.frantsi@inclus.com

REFERENCES

Douglas, Heather, 2000, “Inductive Risk and Values in Science” in Philosophy of Science Vol. 67, No. 4, pp. 559-579, The University of Chicago Press

Douglas, Heather, 2009, Science, Policy, and the Value-Free Ideal, University of Pittsburgh Press

Hubbard, Douglas W., 2020, The Failure of Risk Management, Wiley, New Jersey

Koskinen, Inkeri, forthcoming, “Participation and Objectivity” in Philosophy of Science, Cambridge University Press

Deloitte, “Enterprise risk management in Finland”, 2022
