Improving risk registers with an AI Assistant 

Risk registers can become cluttered and outdated over time, filling up with vague wording and duplicate entries while key risks go missing. One of Inclus’ team members, Alexander Westergård, explored how AI could support and enhance the risk management process as part of his Master’s thesis in the Systems and Operations Research program at Aalto University. 

Alexander’s research focused on building and testing an AI assistant that helps clean, complete, and harmonize risk registers while keeping human experts firmly in control. The results were both practical and promising. 

Case study with Inclus 

Using Inclus as the case example, Alexander and the team improved our own enterprise risk management (ERM) risk register with an AI assistant in a participatory risk management setting. Here’s what the experiment yielded: 

  • 58 → 35 risks: The AI assistant helped reduce the number of risks without losing meaningful content. 

  • 30 overlaps merged into 9: Redundant risks were consolidated into well-defined entries. 

  • 8 risks removed, 6 added: The register became both leaner and more accurate. 

  • Participant feedback: The final register was seen as more complete, precise, and up-to-date. 

How the AI assistant was utilized 

At the core of the AI assistant are the latest large language models. However, relying on language models alone is not enough: while they contain massive amounts of general information, they often lack case-specific knowledge. Our solution was to give the AI all the context it needed. The core sources were:  

  • Full access to the latest risk register, including descriptions, scores, comments, and tasks. 

  • A Retrieval Augmented Generation (RAG) knowledge base with thousands of pages of company information. 

  • Direct interaction with users, who guided the model using their expertise and tacit knowledge. 
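The thesis does not publish the assistant’s implementation, so the sketch below is purely illustrative: `build_context` is a hypothetical helper showing how the three context sources listed above could be concatenated into a single prompt block before being sent to a language model.

```python
def build_context(register_entries: list[str],
                  retrieved_docs: list[str],
                  expert_notes: str) -> str:
    """Assemble the three context sources into one prompt block.

    register_entries: risk descriptions, scores, comments, and tasks
    retrieved_docs:   passages pulled from the RAG knowledge base
    expert_notes:     guidance provided by the participating experts
    """
    sections = [
        "## Current risk register",
        *register_entries,
        "## Retrieved company knowledge",
        *retrieved_docs,
        "## Expert guidance",
        expert_notes,
    ]
    return "\n".join(sections)
```

In a real system the retrieved passages would come from a vector search over the knowledge base; here they are simply passed in as strings.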

Importantly, the assistant was designed with safeguards. It doesn’t make changes on its own — it only suggests edits, which users must approve. 
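This suggest-then-approve safeguard can be pictured as a small data model. The names below (`Suggestion`, `apply_approved`) are a hypothetical illustration of the workflow, not the assistant’s actual code: the model may only emit suggestion objects, and nothing changes the register until a human marks a suggestion as approved.

```python
from dataclasses import dataclass

@dataclass
class Suggestion:
    """One AI-proposed edit; inert until a human approves it."""
    risk_id: str
    action: str            # "remove" or "reword"
    new_text: str = ""
    approved: bool = False

def apply_approved(register: dict[str, str],
                   suggestions: list[Suggestion]) -> dict[str, str]:
    """Return a new register with only the human-approved suggestions applied."""
    updated = dict(register)
    for s in suggestions:
        if not s.approved:
            continue                      # unapproved edits never take effect
        if s.action == "remove":
            updated.pop(s.risk_id, None)
        elif s.action == "reword":
            updated[s.risk_id] = s.new_text
    return updated
```

The key design choice is that `apply_approved` returns a new register rather than mutating the old one, so the original always survives until the reviewed edits are committed.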

While risks are speculative by nature, they still require reasoning. AI excels at generating plausible scenarios and backing them with relevant information; human oversight ensures that only the most relevant risks make it into the final register. 

Improved collaboration 

AI not only makes risk management more efficient; it also enables more interaction among stakeholders. Inclus is used as a tool in some of the most challenging conflict resolution efforts around the world, where achieving mutual understanding is difficult yet essential for advancing sustainable peace. New AI capabilities can improve mutual understanding among people.  

  1. AI helps people understand each other better. Because large language models operate on semantics, they are largely language-agnostic: they process the meaning of information rather than its surface form. This enables more efficient communication among people who speak different languages or express themselves with varying fluency.  

  2. AI allows a broader audience to be involved in the risk management process. Large language models excel at language-related tasks, so qualitative data can now be processed at scale with high efficiency. Large amounts of input can be collected and summarized into an easily analyzable form; overlapping comments are not a problem, as they can easily be combined.  
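The article does not describe how overlapping comments are detected; in practice a language model or text embeddings would judge semantic similarity. As a rough, self-contained stand-in, the sketch below greedily groups comments by surface-text similarity using only Python’s standard library.

```python
from difflib import SequenceMatcher

def group_overlapping(comments: list[str], threshold: float = 0.8) -> list[list[str]]:
    """Greedily group comments whose similarity to a group's first member
    exceeds the threshold; each group can then be merged into one entry."""
    groups: list[list[str]] = []
    for comment in comments:
        for group in groups:
            ratio = SequenceMatcher(None, comment.lower(), group[0].lower()).ratio()
            if ratio >= threshold:
                group.append(comment)  # near-duplicate of this group
                break
        else:
            groups.append([comment])   # no match: start a new group
    return groups
```

A production assistant would compare meanings rather than characters, so paraphrases in different words (or different languages) would also land in the same group; this sketch only catches near-identical phrasing.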

 

The insights shared here are based on Alexander Westergård’s Master’s thesis in the Systems and Operations Research program at Aalto University. If you’re curious to learn more or want to discuss the topic further, feel free to reach out to him at alexander.westergard@inclus.com.

 
