In the previous blog post in our Risk Management series, we looked at a simplified process for identifying risks by outlining critical study factors (patient safety and critical data) and evaluating the conditions that increase the likelihood of a negative outcome. Study teams typically use a tool like the RACT (Risk Assessment and Categorization Tool) to prompt them to evaluate every aspect of the study. I've seen study teams generate as many as fifty potential risks using this tool. If you consider everything, you reduce the risk (speaking of risks!) of missing something; however, you increase the risk of generating a lot of noise.
The classic methodology to cut through the noise involves ranking those risks by severity of impact (how bad it would be); likelihood of occurrence (how likely it is to happen, with the idea being that if it's pretty likely you should worry about it more than if it's unlikely); and likelihood of detection (that's detection by the study team, not by a regulatory authority, with the idea that if it's likely to slip by unnoticed, you should worry about it more). You then add or multiply the scores and end up with a "red," "yellow," or "green" assessment for each risk.
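To make the arithmetic concrete, here's a minimal sketch of that classic scoring approach. The 1-3 rating scale, the multiplication, the cutoff values, and the example risks are all hypothetical, chosen just to illustrate the mechanics; real teams calibrate their own scales and thresholds.

```python
def classify(impact: int, likelihood: int, detectability: int) -> str:
    """Multiply the three 1-3 ratings and bucket the product into a color.

    Thresholds here are illustrative, not from any guideline.
    """
    score = impact * likelihood * detectability  # ranges from 1 to 27
    if score >= 12:
        return "red"
    if score >= 6:
        return "yellow"
    return "green"

# Hypothetical risks with (impact, likelihood, detectability) ratings
risks = {
    "unblinding error": (3, 2, 3),
    "late query resolution": (2, 2, 2),
    "missed lab draw": (1, 2, 2),
}

for name, ratings in risks.items():
    print(f"{name}: {classify(*ratings)}")
```

Even in this toy version, you can see the problem the next paragraph describes: three separate ratings per risk, times fifty risks, is a lot of deliberation for a handful of reds.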
If you start with a list of fifty potential risks and apply three different evaluations to each, that's... a pretty long study team meeting where everyone is putting their Zoom on mute to check their Twitter feed. You end up with many medium and low risks and a few highs, which you probably could have pinpointed without completing fifty calculations.
I'm loath to recommend shortcuts, especially when this methodology is outlined right in ICH GCP E6(R2) section 5.0.3; however, there might be a more value-added way to recognize the risks that should rise to the top. For example, you might take a holistic approach by force-ranking the risks as a team, or having individual members force-rank the risks and then share their results, which gives the team an opportunity to discuss their different impressions of why certain risks might be more significant.
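One way to combine individual force-rankings is simply to average each member's rank for each risk and sort, which also makes disagreements easy to spot. This is a hypothetical sketch, not a prescribed method; the names and risks are invented for illustration.

```python
from statistics import mean

# Each (hypothetical) team member force-ranks the same risks,
# with position 1 being the most significant.
rankings = {
    "alice": ["unblinding error", "missed lab draw", "late queries"],
    "bob":   ["missed lab draw", "unblinding error", "late queries"],
}

risks = rankings["alice"]  # every member ranks the same set of risks

# Average each risk's rank across all members (index + 1 = rank).
avg_rank = {
    risk: mean(member.index(risk) + 1 for member in rankings.values())
    for risk in risks
}

# Sort by average rank; ties and wide spreads are the discussion prompts.
for risk, rank in sorted(avg_rank.items(), key=lambda kv: kv[1]):
    print(f"{risk}: average rank {rank:.1f}")
```

The point isn't the arithmetic; it's that a shared ranking surfaces where impressions diverge, which is exactly the conversation the team needs to have.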
The bottom line is that if an overwrought methodology is going to interfere with your team's ability to have a meaningful discussion about risk, it's not the right methodology, even if it is the "correct" one. 'Tis a far, far better thing to mitigate five critical risks than to identify fifty risks and fail to follow up on them.