The risk matrix: why the most used tool in risk management is also the most criticised

Risk Companion

May 5, 2026
9 min read

Key Takeaways

  • A risk matrix plots probability against impact to produce a risk score, but two risks with identical scores can require completely different management responses that the matrix cannot distinguish between.
  • Research shows that risk matrices have structural limitations: they can rank risks in the wrong order, assign identical scores to risks with dramatically different financial profiles, and fail to capture the cumulative effect of multiple risks materialising simultaneously.
  • The most common matrix failure is treating a coloured grid as the end of the risk conversation rather than the beginning, regardless of how well the tool itself is designed.
  • Bow-tie diagrams and probabilistic analysis do not replace the risk matrix but answer the questions the matrix is not designed to handle, such as root causes, consequence chains, and realistic financial exposure.

Almost every organisation uses a risk matrix. It is the first thing that appears in any risk workshop, the default output of any risk assessment, and the centrepiece of most risk registers. Walk into a board presentation, a safety review, or an ISO audit, and there it is: a five-by-five grid, colour-coded from green to red, with dots scattered across it like a threat map.

And yet the risk matrix is also one of the most criticised tools in the field, with researchers documenting specific, technical failures and practitioners writing at length about how it distorts judgment. Some have called for it to be abandoned entirely.

The honest answer is that it is neither indispensable nor misleading in absolute terms, and understanding why requires looking clearly at both sides.

A risk matrix (sometimes called a risk assessment matrix or, in industry shorthand, a probability and impact matrix) is a visual tool that plots identified risks on a grid based on two dimensions: the probability that a risk event occurs, and the impact if it does. The resulting risk score, calculated by multiplying probability by impact, determines where a risk sits on the matrix and what colour zone it falls into: green for low, amber for moderate, red for high. It gives teams a fast, visual way to prioritise which risks need attention first.
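The scoring logic described above can be sketched in a few lines. This is a minimal illustration, not any particular standard: the band thresholds below are hypothetical, and organisations define their own cut-offs.

```python
# Minimal sketch of a 5x5 risk matrix score and colour band.
# The band thresholds are illustrative; every organisation sets its own.

def risk_score(probability: int, impact: int) -> int:
    """Both inputs are ratings from 1 (lowest) to 5 (highest)."""
    if not (1 <= probability <= 5 and 1 <= impact <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return probability * impact

def colour_band(score: int) -> str:
    if score >= 15:
        return "red"    # high: act now
    if score >= 8:
        return "amber"  # moderate: monitor and mitigate
    return "green"      # low: accept or review periodically

score = risk_score(probability=3, impact=4)
print(score, colour_band(score))  # 12 amber
```

Note how much the thresholds matter: moving the amber boundary from 8 to 10 silently reclassifies several cells, which is one reason two organisations with "the same" 5x5 matrix can prioritise differently.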

Used well, it serves a real purpose; used poorly (which describes most implementations), it creates a sense of control that does not exist.

Why the risk matrix became the default

The appeal is easy to understand. The risk matrix takes something genuinely difficult (how bad is this risk, really?) and converts it into something legible: a number, a colour, a position on a grid. It creates a shared language. It produces a document you can show an auditor or a board.

For teams without a dedicated risk officer (which describes most growing and mid-sized businesses), the matrix is often the first structured approach to risk anyone has used, and for that purpose it has real value. Getting a team in a room, naming risks, and forcing a conversation about probability and impact is better than not having the conversation at all.

The 5x5 risk matrix in particular has become a standard because it offers enough granularity to make meaningful distinctions without becoming so complex that people give up filling it in. A 3x3 matrix collapses too many risks into the same zone. A 10x10 creates false precision nobody can defend. The 5x5 is a reasonable compromise.

But "reasonable compromise" is not the same as "reliable tool." And that distinction is where most organisations quietly get into trouble.

What the research actually says is wrong with it

The academic critique of the risk matrix is not vague. In a widely cited 2008 paper published in Risk Analysis, Tony Cox showed that risk matrices can violate basic principles of rational decision-making: two risks carrying dramatically different expected losses can land in the same colour zone, and risks can be ranked in the wrong order. Teams end up focusing on the wrong things, not because they made bad judgments, but because the tool itself produces structurally misleading outputs.

The core problem is this: a probability score of 3 and an impact score of 4 gives a risk score of 12. But so does a probability of 4 and an impact of 3. On most matrices, these land in the same cell or the same colour band. But they are not the same risk. A high-probability, lower-impact event requires a completely different management response than a low-probability, catastrophic-impact event. The matrix erases that distinction.
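A quick calculation makes the erased distinction concrete. The probabilities and loss figures below are hypothetical, chosen only to show how two risks in the same matrix cell can carry very different expected losses.

```python
# Two risks can land in the same 5x5 cell (score 12) yet carry
# very different exposures. All figures below are hypothetical.

risks = {
    "frequent_minor":    {"annual_probability": 0.40, "loss_if_it_occurs": 25_000},
    "rare_catastrophic": {"annual_probability": 0.05, "loss_if_it_occurs": 2_000_000},
}

for name, r in risks.items():
    expected_loss = r["annual_probability"] * r["loss_if_it_occurs"]
    print(f"{name}: expected annual loss of {expected_loss:,.0f}")

# 0.40 x 25,000    = 10,000
# 0.05 x 2,000,000 = 100,000  (ten times larger, same matrix cell)
```

The second risk carries ten times the expected loss of the first, and on top of that demands a fundamentally different response (insurance, contingency planning) rather than frequency reduction. The matrix shows both as the same dot.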

There is also the problem of subjectivity dressed up as measurement. When a team rates a risk as "3 out of 5 for probability," that number is not based on observed frequency data. It is a group's collective gut feeling, shaped by recent experience, cognitive biases, and (often) a desire not to alarm anyone. One person's "unlikely" is another person's "it happened twice last year." The matrix converts that uncertainty into a precise-looking score, which gives the output a weight it has not earned.

A 2024 study in Humanities and Social Sciences Communications made a similar point in the context of project risk management. The authors identified several limitations of probability-impact matrices and proposed Monte Carlo simulation as a more reliable method for prioritising project risks, precisely because it captures what a colour-coded grid cannot.

The pattern that repeats itself

Take a construction company with around 80 employees. A near-miss incident occurs on site and the risk register gets opened, probably for the first time in months. The matrix has 34 risks on it. Thirty-one are rated amber. Two are green. One is red: the risk that just nearly caused a serious injury. It has been on the matrix for eight months with no owner, no measures attached, and a review date that passed six weeks earlier.

The matrix did not fail them because of its shape or size, but because the team had used it as a filing exercise rather than a management tool. Once a risk was rated and coloured, the conversation stopped. Nobody asked: what is actually preventing this? Who is accountable? What happens if the measure fails?

This pattern is not unusual. The matrix becomes a photograph of risk at one point in time rather than a living view of how risks are being managed. And because it looks thorough (all those rows, all those scores), it creates confidence that is not warranted.

If your risk matrix disappeared tomorrow, would anything actually change about how your team operates?

What the matrix cannot tell you

The risk matrix answers one question: which risks look most serious right now, based on our current estimates? That is a useful question. But it is not the only question, and for most operational decisions it is not even the most important one.

The matrix does not tell you what is causing a risk. Two risks can share the same score but have completely different root causes, and without understanding those causes you cannot design effective preventive measures.

It does not tell you what the consequences look like in detail. A risk rated 4 for impact could mean a €50,000 loss or a regulatory shutdown, depending on how it plays out. Those require very different responses, and the matrix does not help you distinguish between them.

It does not tell you whether your current measures are working. A risk scored before any measures were applied and a risk scored after measures have been in place for six months might look identical on the matrix. The matrix does not track the gap between where you started and where you are heading.

It also does not tell you your realistic financial exposure. If you have 15 risks on your register and half of them materialise in the same quarter (which is not as unlikely as most teams assume), a risk matrix cannot tell you what that costs you, but probabilistic analysis can.

This is why approaches like bow-tie diagrams and Monte Carlo simulation exist alongside the risk matrix rather than replacing it. A bow-tie makes causes and consequences explicit, separating the measures that prevent a risk from occurring from those that limit damage if it does. A probabilistic risk analysis runs your risk register through thousands of scenarios and produces a confidence interval you can defend to a board, which is something a risk score of 12 cannot do.
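The probabilistic analysis described above can be sketched with a simple Monte Carlo loop. The register entries and simulation size below are hypothetical; a real analysis would also model loss distributions rather than fixed amounts, and correlations between risks.

```python
import random

# Monte Carlo sketch: run a small risk register through many simulated
# years and report percentiles of total loss. All figures are hypothetical.

register = [
    {"annual_probability": 0.30, "loss": 50_000},
    {"annual_probability": 0.10, "loss": 400_000},
    {"annual_probability": 0.05, "loss": 1_200_000},
]

def simulate_year(rng: random.Random) -> float:
    """Total loss in one simulated year: each risk either occurs or not."""
    return sum(r["loss"] for r in register if rng.random() < r["annual_probability"])

rng = random.Random(42)  # fixed seed so the run is reproducible
totals = sorted(simulate_year(rng) for _ in range(10_000))
p50, p95 = totals[5_000], totals[9_500]
print(f"median annual loss {p50:,.0f}; 95th percentile {p95:,.0f}")
```

The 95th-percentile figure is exactly the kind of defensible statement a colour zone cannot produce: "in 19 years out of 20, our combined losses stay below this amount."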

How to use the risk matrix without fooling yourself

None of this means you should abandon the risk matrix. For most teams it remains the most practical starting point for qualitative risk assessment: easy to explain, easy to use in a workshop, and easy to communicate upward. The goal is not to replace it but to be honest about what it is and is not doing.

A few things that make a meaningful difference:

Name an owner for every risk. A risk with no named owner is not being managed. The owner does not have to be the person who identified the risk; they need to be the person accountable for ensuring measures are in place and working. A risk register that shows every risk with a named owner and a clear next step is more valuable than a beautifully colour-coded matrix where nobody knows who is responsible.

Separate inherent risk from residual risk. Score risks before your measures are applied (inherent risk) and then again based on the effect you expect those measures to have (residual risk or target assessment). The gap between the two tells you how much work your measures need to do, and whether they are actually doing it. This is the kind of gap analysis that auditors care about and that most matrices ignore entirely.
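In data terms, this is a small extension to each register entry: store two assessments per risk instead of one. The structure and field names below are a hypothetical sketch, not a prescribed schema.

```python
# Sketch of inherent vs residual scoring for one risk.
# Field names and ratings are hypothetical.

risk = {
    "name": "Supplier insolvency",
    "inherent": {"probability": 4, "impact": 4},  # before measures
    "residual": {"probability": 2, "impact": 4},  # expected effect of measures
}

def score(assessment: dict) -> int:
    return assessment["probability"] * assessment["impact"]

gap = score(risk["inherent"]) - score(risk["residual"])
print(f'{risk["name"]}: inherent {score(risk["inherent"])}, '
      f'residual {score(risk["residual"])}, gap {gap}')
# inherent 16, residual 8: the gap of 8 is the work the measures must do
```

If a periodic review finds the observed score closer to 16 than 8, the measures are not delivering, and that is a finding the single-score matrix would never surface.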

Set review dates and keep them. A risk matrix that has not been updated in six months is not a risk tool. It is a historical document. Build review dates into the process, assign reminders, and treat an overdue review as an active management failure.
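Both of the failures described so far (no owner, overdue review) are mechanically checkable, which is why they make a good automated health check. The sketch below is a hypothetical illustration of that idea; the field names and entries are invented.

```python
from datetime import date

# Sketch of a register health check: flag risks with no owner
# or an overdue review date. Entries are hypothetical.

register = [
    {"name": "Site injury",  "owner": None,  "review_due": date(2026, 3, 1)},
    {"name": "Data breach",  "owner": "COO", "review_due": date(2026, 9, 1)},
]

def health_issues(register: list[dict], today: date) -> list[str]:
    issues = []
    for r in register:
        if not r["owner"]:
            issues.append(f'{r["name"]}: no owner assigned')
        if r["review_due"] < today:
            issues.append(f'{r["name"]}: review overdue since {r["review_due"]}')
    return issues

for issue in health_issues(register, today=date(2026, 5, 5)):
    print(issue)
```

Run daily, a check like this turns "the review date passed six weeks ago" from a retrospective discovery into a same-day alert.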

Do not let the colour do all the work. Red risks get attention. Green risks get ignored. Amber risks drift, and that is where most risks end up. Build a discipline of regularly reviewing amber risks, because that is where the slow-moving problems accumulate. The near-miss that should have been a red flag often spent months quietly sitting in amber while nobody looked closely enough.

Supplement with tools that go deeper. For your highest-priority risks, the matrix should be the beginning of the analysis, not the end. Bow-tie diagrams show the causal chain and the barriers. Assessments that track your progression from initial scoring to target state show whether management is making a difference. And for risks with significant financial exposure, visualisation tools give you something more defensible than a colour.

Risk Companion's risk matrix gives you the visual overview: where risks cluster, which are red, and which need immediate attention. But the platform is built on the premise that the matrix is one input rather than the whole answer. Automated health checks flag missing owners and overdue measures, gap analysis tracks the distance between your initial assessment and your target state, and bow-tie diagrams sit alongside the matrix for any risk where causes and consequences need to be made explicit. The result is a risk management process where the matrix starts the conversation rather than ending it.

The real problem is not the tool

We have seen teams with elaborate, carefully maintained risk matrices who have no real grip on their risk exposure, and teams with a simple, honest list of ten risks who manage them with genuine rigour and accountability. The difference is never the sophistication of the scoring system but whether the risk conversation is a real conversation or a compliance exercise.

The risk matrix gets criticised because it is easy to do badly and easy to mistake busyness for management. Rate the risks, colour them in, file the document, tick the box: that sequence describes a lot of what passes for risk management in practice, and the matrix, by providing a satisfying output at the end of the process, makes it easy to stop there.

The fix is not to throw away the matrix but to treat it as what it is (a prioritisation tool that starts a conversation) and then actually have the conversation. Who owns this? What are we doing about it? Is it working? When do we check again?

Those questions do not require a more sophisticated tool. They require a different attitude toward what risk management is actually for.

Frequently Asked Questions

What is a risk matrix?

A risk matrix is a visual tool that plots identified risks on a grid based on two dimensions: the probability that a risk event occurs, and the impact if it does. The resulting risk score (typically probability multiplied by impact) determines where a risk sits on the matrix and what colour zone it falls into. It gives teams a fast way to prioritise which risks need attention first.

Ready to improve your risk management?

See how Risk Companion can help you implement these best practices with powerful, easy-to-use tools.

Request a Demo