Metrics, Schmetrics: RBM and ICH E6(R2) Require Study-Specific Analysis for Adequate Trial and CRO Oversight


By Penelope Manasco - October 31, 2017

I just finished listening again to a great presentation by Rajneesh Patil and Zabir Macci from Quintiles, “Achieve Better Outcomes with New RBM Technology Enhancements”.
 
Two comments Rajneesh made really struck home. When asked how their team uses metrics, he said that metrics were not particularly useful because they were retrospective and passive; the real need was for analytics.
 
Analytic tools must enable the review of study-specific critical data and processes (e.g., efficacy, safety, IP management, and Human Subject protection) for each study. These analytics cannot be standardized; they should be designed from the Risk Assessment for the specific trial.
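As a rough sketch of what “designed from the Risk Assessment” can look like in practice (all names here are invented, not any vendor’s schema or our production tooling), the Risk Assessment itself can be captured as data that selects which study-specific checks run, instead of one standardized report set applied to every trial:

```python
# Sketch with invented names: the trial's Risk Assessment, captured
# as data, decides which study-specific checks run, instead of one
# standardized report set applied to every trial.
RISK_ASSESSMENT = {
    "efficacy": {"risk": "high",   "checks": ["primary_endpoint_trend"]},
    "safety":   {"risk": "high",   "checks": ["ae_vs_conmed_consistency"]},
    "ip_mgmt":  {"risk": "medium", "checks": ["dispensing_vs_dosing_diary"]},
}

def checks_for_trial(assessment, risk_levels=("high", "medium")):
    """Return the study-specific checks implied by this trial's risks."""
    return [
        check
        for area in assessment.values()
        if area["risk"] in risk_levels
        for check in area["checks"]
    ]

print(checks_for_trial(RISK_ASSESSMENT))
# ['primary_endpoint_trend', 'ae_vs_conmed_consistency',
#  'dispensing_vs_dosing_diary']
```

A different trial, with a different Risk Assessment, would get a different set of checks from the same small piece of code.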
 
If a technology vendor tells you its tool requires all data and reports to be standardized: run, don’t walk, away. Your solution (and your oversight of the trial) must include reviewing critical study-specific data and processes.

Ask any C-level executive involved in developing medical products about the importance of data integrity. They will tell you data integrity (i.e., data that withstand regulatory scrutiny) is critical. Human Subject protection and early identification of issues that will “sink” a trial or a program will also top their list of critical issues. That is where the study team needs to focus, and these areas are not easily measured by any KPI.
 
Metrics such as “query rate” and “number of queries open for >90 days” may be meaningless; understanding the underlying data is not. Queries are labor-intensive and costly for both the CRO and the sites. Do you know whether the queries open for longer than 90 days were appropriately raised? Do you know whether each query was even needed (i.e., raised on a critical data field rather than on a field whose data are not critical to the analysis of subject safety)? A raw query rate can also be inflated by multiple queries on the same field, giving a spurious picture of whether a site even has a problem with queries.
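To make that concrete, here is a minimal sketch in Python/pandas. Everything in it is invented for illustration (the columns site, form, field, is_critical, and days_open are not any EDC system’s schema). It shows how an aged-query KPI shifts once you collapse repeat queries on the same field and keep only the fields the Risk Assessment flags as critical:

```python
import pandas as pd

# Hypothetical query-log export; all values and column names invented.
queries = pd.DataFrame({
    "site":        ["101", "101", "101", "101", "102", "102"],
    "form":        ["VITALS", "VITALS", "VITALS", "AE", "AE", "DEMOG"],
    "field":       ["SYSBP", "SYSBP", "SYSBP", "AETERM", "AETERM", "BRTHDAT"],
    "is_critical": [True, True, True, True, True, False],
    "days_open":   [95, 110, 92, 30, 120, 100],
})

# Naive KPI: count every query open >90 days, including repeat
# queries fired on the same field.
naive = (queries["days_open"] > 90).groupby(queries["site"]).sum()

# Closer to the underlying data: collapse repeat queries on the same
# site/form/field (keeping the oldest), then keep only fields the
# Risk Assessment flagged as critical.
deduped = (
    queries[queries["is_critical"]]
    .sort_values("days_open", ascending=False)
    .drop_duplicates(subset=["site", "form", "field"])
)
informative = (deduped["days_open"] > 90).groupby(deduped["site"]).sum()

print(naive)        # site 101: 3, site 102: 2 (101 looks worse)
print(informative)  # both sites: 1 (the raw KPI overstated 101's problem)
```

The point is not this particular filter; it is that the KPI alone cannot tell you which site, if any, actually has a query problem.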
 
The second important comment Rajneesh made speaks to the way we review trial data: his team found 4x better quality (fewer errors in critical data fields) in trials that used analytics versus those that used SDV. And if you think about it, remote eCRF review in isolation should be considered the same as SDV. Why, you ask?
 
How data are presented makes a huge difference in what findings or trends you are able to see. With eCRF review, CRAs review each page in isolation: form by form, visit by visit. They do not compare data across visits, across datasets, or across sources. It is easy to miss a trend when you never see the data side by side across datasets, data sources, and time points.
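As a toy illustration of the alternative (invented data and column names, not our production tooling), pivoting a critical lab value across visits puts each subject’s whole trajectory on one row. Reviewed page by page, no single value below would necessarily draw attention; side by side, the doubling at every visit stands out:

```python
import pandas as pd

# Invented example: one critical lab value captured visit by visit.
records = pd.DataFrame({
    "subject": ["001", "001", "001", "002", "002", "002"],
    "visit":   ["V1", "V2", "V3", "V1", "V2", "V3"],
    "alt_u_l": [22, 48, 96, 25, 24, 26],
})

# Form-by-form review sees each value alone. Pivoting lines up each
# subject's values across visits...
by_subject = records.pivot(index="subject", columns="visit", values="alt_u_l")
print(by_subject)
# visit    V1  V2  V3
# subject
# 001      22  48  96   <- ALT roughly doubling at every visit
# 002      25  24  26

# ...and a simple cross-visit check surfaces the trend automatically.
rising = by_subject[by_subject["V3"] > 2 * by_subject["V1"]]
print(rising.index.tolist())  # ['001']
```

The same idea extends across datasets and sources (labs against adverse events against dosing records), comparisons that page-by-page review never makes.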
 
We have developed simple, rapidly implemented (even for ongoing trials) study-specific tools to enhance and ensure comprehensive review of critical data and processes.
Please contact me to learn more about how to implement this critical component of trial oversight.