
Risks of Key Risk Indicators

Be sure you know what the data are and aren't telling you

By Penelope Manasco - April 18, 2019

Our team and collaborators continually evaluate the myriad Risk-Based Monitoring (RBM) methods to better understand their strengths and weaknesses.

Key Risk Indicators (KRIs) are often touted as an excellent approach to conducting RBM. While many Sponsors and CROs have adopted them, major issues have surfaced concerning KRIs’ ability to identify important areas of risk that can critically affect the integrity of the study.

For many, using KRIs is considered “a good first step.” It’s an improvement compared with having no overall trend analysis at all.

It’s important to note, however, that recent data we identified bring those ideas into question. Here’s why:
  1. Non-specific KRIs such as deviation counts, or even deviation rates (corrected for the number of subjects randomized), may miss critical, low-incidence deviations. If all deviations are considered together, the critical, low-incidence deviations may be completely masked by the higher rates of the remaining deviations, and a finding that could affect the integrity of the data is missed simply because of the difference in incidence rates (see the first sketch after this list).

Example: One site may miss the correct time frame for measuring RECIST, a critical efficacy measure used in solid tumor oncology trials. The efficacy endpoint is often time to progression; if the imaging is not taken at the correct interval, time to progression cannot be accurately interpreted. This deviation can result in loss of integrity for all the efficacy data at that study site and put subjects at undue risk (because their data cannot be used for the trial endpoint). It may be a low-incidence deviation compared with all other deviations in the trial, but it still represents a critical quality finding.
  2. KRIs often miss study-specific process errors that can affect trial quality, particularly when detecting them requires combining data from different databases or examining audit trails (second sketch after this list).

Example: Many studies (e.g., CNS studies, dermatology studies) require assessment tools administered by a trained investigator. KRIs are not designed to evaluate whether the primary assessment was conducted by the correct person and whether that person was appropriately trained. The data needed for that check usually sit in separate databases and may include the audit trail. Standard KRIs will miss this critical protocol-specific process error, and missing it may mean censoring multiple subjects whose endpoint was not correctly completed and documented.
  3. Most KRIs require significant data to accrue before the KRI is elevated to a higher risk level, so the event of concern may be repeated numerous times before the signal (the risk-level increase) is strong enough to flag the increased risk (third sketch after this list). The corollary is that KRIs are also insensitive to the correction of errors: once a KRI has been elevated to high risk, it is difficult or impossible to identify when a correction has been made.
  4. KRIs are dependent on the stage each subject has reached in the study. Adverse event rates are often used as a quality KRI, but comparisons across sites depend on the phase of the study at each site (fourth sketch after this list).

Example: If subjects at one site have already received study drug treatment, they should not be compared with a site whose subjects have not yet received treatment. A corollary would be comparing a site whose subjects have been treated for nearly a year with a site whose subjects are newly treated. Rates of adverse events will likely differ based on timing alone.
  5. KRIs vary based on the underlying analytics and data structure. Simple data summaries often assume that the data have an underlying normal distribution; without direct examination of that assumption (e.g., via scatter plots or distribution plots), overall conclusions may be incorrect and important discrepancies may be missed.
Example: If you measure counts of an event, you need to know whether the counts have been corrected for the number of subjects and what correction was applied. For instance, the subject correction factor for screen failures differs from that for early terminations. Standardizing the data by the number of subjects, however, may artificially inflate the rate at sites with small numbers of subjects. Another approach, using Z scores (corrected for the distribution of the data across sites), may provide a better understanding of true outliers, but the caveats above still hold (fifth sketch after this list).
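To make point 1 concrete, here is a minimal sketch with invented deviation counts (sites, categories, and numbers are hypothetical, not from any real study). The pooled deviation rate looks unremarkable at every site, while a per-category view immediately surfaces the rare but critical RECIST-timing deviations at one of them.

```python
# Hypothetical deviation counts per site, broken out by category.
# All numbers are invented for illustration only.
deviations = {
    "Site A": {"visit window": 38, "consent form": 12, "RECIST timing": 0},
    "Site B": {"visit window": 41, "consent form": 9,  "RECIST timing": 3},
    "Site C": {"visit window": 35, "consent form": 14, "RECIST timing": 0},
}
subjects = {"Site A": 25, "Site B": 24, "Site C": 26}
CRITICAL = {"RECIST timing"}  # deviations that threaten endpoint integrity

for site, counts in deviations.items():
    pooled_rate = sum(counts.values()) / subjects[site]
    print(f"{site}: pooled deviation rate = {pooled_rate:.2f} per subject")

# The pooled rates are nearly identical, so a count-based KRI sees no outlier.
# Looking at the critical categories directly surfaces the problem immediately.
for site, counts in deviations.items():
    for category in CRITICAL:
        if counts[category] > 0:
            print(f"ALERT {site}: {counts[category]} critical '{category}' deviation(s)")
```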
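Point 2 is easiest to see as a cross-database check. The sketch below assumes two hypothetical extracts, an assessment log from the EDC and a training roster from a separate learning-management system, and flags any primary assessment performed by someone without documented prior training; a KRI computed within a single database never makes this comparison.

```python
from datetime import date

# Hypothetical extract of primary assessments (e.g., a rater-administered scale) from the EDC.
assessments = [
    {"subject": "001", "rater": "Dr. Lee",  "date": date(2019, 2, 1)},
    {"subject": "002", "rater": "Dr. Shah", "date": date(2019, 2, 3)},
]

# Hypothetical training roster from a separate learning-management system.
training = {
    "Dr. Lee": date(2019, 1, 15),  # trained before the assessments above
    # "Dr. Shah" has no training record for this instrument
}

for a in assessments:
    trained_on = training.get(a["rater"])
    if trained_on is None or trained_on > a["date"]:
        print(f"ALERT subject {a['subject']}: assessment by {a['rater']} "
              f"on {a['date']} without documented prior training")
```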
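Point 3, the accrual lag, can be sketched with an assumed escalation threshold of three events and an invented event stream: a threshold-based KRI stays quiet while the same critical error recurs, whereas an event-level trigger fires on the first occurrence.

```python
# Hypothetical stream of (study_day, site, is_critical_error) observations.
events = [(3, "Site B", True), (9, "Site B", True), (16, "Site B", True), (21, "Site B", True)]

KRI_THRESHOLD = 3  # assumed: the KRI escalates only after 3 accrued events
counts = {}

for day, site, is_critical in events:
    if not is_critical:
        continue
    # Event-level trigger: acts on the first occurrence.
    if counts.get(site, 0) == 0:
        print(f"Day {day}: event-level trigger fires for {site}")
    counts[site] = counts.get(site, 0) + 1
    # Threshold-based KRI: silent until enough events accrue.
    if counts[site] == KRI_THRESHOLD:
        print(f"Day {day}: KRI for {site} finally escalates after {KRI_THRESHOLD} events")
```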
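Point 4 is about comparability. The toy numbers below show how raw adverse-event counts ignore how long each site's subjects have been on treatment; expressing events per patient-year of exposure (one common adjustment, used here purely for illustration) reverses the ranking.

```python
# Hypothetical site summaries: total AEs and total treatment exposure in patient-years.
sites = {
    "Site A": {"ae_count": 40, "patient_years": 20.0},  # subjects treated for about a year
    "Site B": {"ae_count": 12, "patient_years": 2.5},   # subjects only recently dosed
}

for site, d in sites.items():
    per_year = d["ae_count"] / d["patient_years"]
    print(f"{site}: {d['ae_count']} AEs, {per_year:.1f} AEs per patient-year")

# Site A has more AEs in absolute terms (40 vs 12), but Site B's
# exposure-adjusted rate (4.8 per patient-year vs 2.0) is the higher one.
```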
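For point 5, the sketch below (again with invented numbers) shows how a raw subject-normalized rate can make a very small site look like the worst performer, and how a Z score across sites puts the rates on a common scale while still inheriting the caveats about the underlying distribution.

```python
from statistics import mean, stdev

# Hypothetical event counts and enrolled subjects per site (numbers invented for illustration).
sites = {
    "Site A": {"events": 3,  "subjects": 4},   # tiny site: a handful of events dominates its rate
    "Site B": {"events": 12, "subjects": 30},
    "Site C": {"events": 11, "subjects": 28},
    "Site D": {"events": 13, "subjects": 31},
}

# Subject-normalized rates: Site A looks worst purely because of its small denominator.
rates = {site: d["events"] / d["subjects"] for site, d in sites.items()}

# Z scores express each site's rate relative to the across-site distribution,
# but they still rest on the distributional assumptions discussed above.
mu, sigma = mean(rates.values()), stdev(rates.values())
for site, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{site}: {rate:.2f} events per subject, z = {(rate - mu) / sigma:+.2f}")
```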


We have found that using protocol-specific analytics to immediately identify critical data and process errors (i.e., errors that matter) when they occur is more efficient and effective than non-specific methods such as KRIs and statistical outlier identification. Protocol-specific “errors that matter” analysis does not depend on meeting an arbitrary number of events, nor does it require the extensive research to find the critical events that KRIs demand.

With our new patent-pending, first-to-market clinical quality management software, REACHER™, the process is fast and easy, requiring minimal Sponsor resources. Protocol-specific errors that matter, issues that matter, and safety triggers are analyzed daily, with critical findings arriving in your email as they occur.

Please let me know your experiences and thoughts about using KRIs, and whether you are interested in Risk-Based Monitoring using more sensitive and specific methods.