Risk Based Monitoring (RBM) Lessons From Research Predicting Autism In Infants

By Penny Manasco - June 5, 2018

I recently read an interview with Oren Miron, a biomedical informatics research associate at Harvard and winner of the 2017 Next Einstein Competition. He repurposed a standard technology for testing infants' hearing as a screen for autism.
Research from the 1970s by Professor Hildesheimer showed that children with autism had a consistently delayed response on Auditory Brainstem Response (ABR) tests. Autism was thought to be a disease of the frontal cortex, so this finding was "lost" in many ways until Miron found the published research.
MRI had become the diagnostic method of choice: a fine tool, but prohibitively expensive to use for screening.
Miron looked for a low-cost way to screen for autism and found it in a test now used routinely to detect hearing loss. He compared hearing-loss screening results from thousands of infants with data on subsequent autism diagnoses and found a consistent response. He is now seeking funding for a prospective trial to validate his findings.
Finding children with autism early allows intervention to begin years before age 4, the average age at which a child is diagnosed with autism. Because so much can be accomplished through early intervention, earlier diagnosis can significantly enhance a child's life.
This story teaches us two RBM lessons:
  1. The tools to conduct RBM are in place and the data are available, but it is HOW you look at the data that affects what you find.  The tools to collect and analyze the critical data (e.g., EDC/eSource, ePRO, CTMS, Protocol-Specific reporting) are available—just like the ABR responses unique to autism were there—it took an inquisitive, scientific mind to “see” the pattern and recognize its significance.
  2. Scientific discipline is needed to evaluate the utility of new tools. Miron used pre-existing data to conduct the first hypothesis testing, which he is following with prospective trials.

We, in the clinical research arena, need to apply the same scientific discipline to evaluating the optimal way to conduct trial oversight. We can't keep using methods such as SDV as our only means of evaluating trial quality. SDV has been shown to be ineffective, a position reiterated in guidance from regulatory authorities. We owe it to scientists everywhere to bring the same scientific rigor to the clinical trials that prove the efficacy and safety of their important discoveries, and to help achieve the ultimate goal of improving patients' lives.
We need to implement analytic approaches to determine whether the experiment (protocol) was performed correctly.  Did the right person, trained to conduct the experiment, conduct the experiment?  Were controls included for the primary endpoints?  In scientific experiments, we include positive and negative controls to be sure the experiment was performed correctly.  In clinical trials, placebo or active controls are often used for this purpose.  But research scientists review all aspects of the experiment (e.g., reagents, procedures, and analysis approaches) before expressing confidence in the results.  We, in clinical research, need to adopt and use the same approach.
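One of the questions above, "Did the right person, trained to conduct the experiment, conduct the experiment?", lends itself to a simple programmatic check. The sketch below is a minimal illustration with entirely hypothetical data and field names (a delegation/training log keyed by site and staff member, cross-checked against visit records); it is not MANA RBM's actual method, just one way such an analytic could work.

```python
from datetime import date

# Hypothetical delegation/training log: who was trained for which task, and when.
training_log = {
    ("site_101", "dr_smith"): {"task": "efficacy_assessment", "trained_on": date(2018, 1, 10)},
    ("site_101", "rn_jones"): {"task": "vital_signs", "trained_on": date(2018, 1, 12)},
}

# Hypothetical visit records: who actually performed each procedure.
visit_records = [
    {"site": "site_101", "staff": "dr_smith", "task": "efficacy_assessment", "date": date(2018, 2, 1)},
    {"site": "site_101", "staff": "rn_jones", "task": "efficacy_assessment", "date": date(2018, 2, 15)},
]

def find_untrained_assessments(records, log):
    """Flag procedures performed by staff not documented as trained
    for that task, or trained only after the procedure date."""
    findings = []
    for rec in records:
        entry = log.get((rec["site"], rec["staff"]))
        if entry is None or entry["task"] != rec["task"] or entry["trained_on"] > rec["date"]:
            findings.append(rec)
    return findings

issues = find_untrained_assessments(visit_records, training_log)
for rec in issues:
    print(f"{rec['site']}: {rec['staff']} performed {rec['task']} "
          f"on {rec['date']} without documented training")
```

Here the check flags the second record, because the staff member was trained only for a different task. In a real trial the same logic would run against the site's delegation log and the EDC audit trail.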
MANA RBM is committed to advancing the scientific discipline of clinical research. We have validated and published, and will continue to validate and publish, the findings and analyses of our proprietary methods for trial oversight. Links to the journal articles and copies are or will be available on our MANARBM.com website.
We have previously published data on approaches superior to SDV and eCRF review, remote informed consent review, site responses to paperless trials, the use of electronic Investigator Site Files for paperless trials, and monitor competencies for RBM. Please contact us if you would like a copy of, or a link to, any of these papers.
Stay tuned; we have many interesting papers that will be published in the near future, including a prospective comparison of SDV and a remote trial management approach.
Please join the mailing list on our website so we can notify you when these important papers are published and released.  Join us, also, as MANA RBM pioneers the field of Clinical Operations as a scientific discipline in lockstep with the scientists designing and developing our new treatments.

New FDA Guidance on Using Image Data for Primary Endpoints Shows Importance of RBM

By Penny Manasco - May 19, 2018

The FDA recently released Guidance on using image data for primary endpoints (Clinical Trial Imaging Endpoint Process Standards, April 2018, https://www.fda.gov/downloads/drugs/guidances/ucm268555.pdf). The Guidance highlights the critical processes of collecting and analyzing image data.
In the past, oversight of image analysis was limited to the reviewer's interpretation of the data. In this Guidance, the FDA clearly illustrates how many aspects of the process can affect the ultimate analysis.
The Guidance discusses many aspects of capturing and analyzing images and how they can affect the final interpretation of study results. If images are not captured consistently, the interpretation can be compromised. If images are not collected at the correct timepoints, endpoints such as Progression-Free Survival cannot be correctly identified. If images are not read by a trained reader who is unaware of the treatment and the stage of treatment, bias can be introduced, which can ultimately affect trial integrity. Finally, if the correct aspects of the image are not identified consistently and correctly for analysis of endpoints such as the RECIST score, the results cannot be interpreted correctly.
Guidance documents like this contain process improvements that should be incorporated into the Risk Assessment process. For instance, after reading this Guidance, processes may need to be modified to add training for the person interpreting the efficacy endpoint. Process oversight should confirm that the correct person completed the analysis and had the proper training and authority to do so. Minimizing the possibility of bias is another area that can be addressed as part of Risk Assessment: how will you minimize the risk, and how will you know if bias occurs?
Risk Based Monitoring was designed to help Sponsors, CROs, and Sites focus on the important aspects of trial conduct. This Guidance document reaffirms the FDA's position that RBM should focus on specific processes, usually identified in the Risk Assessment, rather than merely conducting SDV. Assuring that primary endpoints are correctly collected, by evaluating the processes used to minimize bias and assure the highest quality data, takes time and effort, but it is ultimately a better use of scarce oversight/monitoring resources.
For more information on MANA RBM’s Risk Assessment Service and how to integrate quality oversight and Risk Based Monitoring into your next trial, contact MANA RBM.  1-919-556-9456 or Dr. Manasco (pmanasco@manarbm.com).

The RBM Scoop from SCOPE (Summit of Clinical Operations Professionals) 2018

By Penny Manasco - February 23, 2018

I just returned from the SCOPE meeting in Orlando. I attended to learn the "state of the industry" on Risk Based Monitoring (RBM) and data analytics. If you are not familiar with this conference, one caveat: only Sponsors and Exhibitors are allowed as speakers, which slants the view. Still, there was a lot to learn.
SCOPE Surprises:
  1. Most CROs that presented and that I talked to (primarily large CROs) implemented RBM within their monitoring groups without incorporating the data strategies for eSource that were released and finalized in 2013, concurrently with the RBM Guidance. These include guidance to collect data using the ALCOA principles (Attributable, Legible, Contemporaneous, Original, Accurate), which allow immediate access to data for central review without waiting for transcription. Takeaway: if you don't include the data strategy, you will always be saddled with those pesky source documents that hamper adoption of a true risk-based monitoring approach. The greatest benefit comes from a combined data collection and oversight strategy.
  2. Many big Pharma companies that contributed to the TransCelerate Guidance for RBM outsource their trials to the big CROs. This means the recommended methods are based primarily on the CRO adoption model, which still relies heavily on onsite visits. This isn't surprising, because onsite visits are the major source of revenue for CROs. Takeaway: there are many approaches to adopting RBM, and the TransCelerate approach may not be the best for you.
  3. While not discussed much elsewhere, the RBM group highlighted three areas of focus:
    1. Recognizing the importance of senior leadership and change management in the adoption of RBM
    2. Revising the Request For Proposal (RFP) process to reflect the new model for trial oversight. Many, if not most, RFPs still ask for estimates based on SDV, estimates of time to do CRF review, and other aspects of trial management that do not incorporate the RBM oversight needed. Takeaway: working with your Outsourcing group is critical. The "buckets" of spending will differ, and the RFP should be flexible enough to recognize these process changes.
    3. Revising the trial management model to focus site performance on critical issues, protocol compliance, and subject safety instead of questionable metrics such as the number of pages SDV'ed and the number of open queries. Takeaway: this is a good first step that we take even further. For the industry to truly adopt the principles of RBM, it is critical to train Trial Managers in quality management (i.e., identifying critical issues, performing root cause analysis, and evaluating the resolution of issues).
  4. The love affair with Key Risk Indicators (KRIs) may be waning (YEAHHH!). IQVIA noted (as we, and others, have identified) that there is a lot of noise in KRIs. We have also found that KRIs are blunt, lagging indicators: outliers are identified much later than with other methods of identifying high-risk sites. If you wait for a KRI to become critical, you have missed a big opportunity to prevent errors rather than correct them.
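The lagging-indicator point is easy to demonstrate with toy numbers. The sketch below uses a hypothetical series of monthly protocol-deviation counts for one site and compares a fixed-threshold KRI against a simple month-over-month trend check; the threshold, growth factor, and data are all illustrative, not any vendor's actual algorithm.

```python
# Hypothetical monthly protocol-deviation counts for one site.
monthly_deviations = [1, 2, 4, 7, 11, 16]

KRI_THRESHOLD = 10  # illustrative fixed KRI cutoff

def kri_alert_month(counts, threshold):
    """Fixed-threshold KRI: alerts only once the count is already critical."""
    for month, count in enumerate(counts, start=1):
        if count >= threshold:
            return month
    return None

def trend_alert_month(counts, growth=1.5, min_count=3):
    """Simple trend check: alerts on >= 50% month-over-month growth,
    once counts are above a small noise floor."""
    for month in range(2, len(counts) + 1):
        prev, curr = counts[month - 2], counts[month - 1]
        if curr >= min_count and prev > 0 and curr / prev >= growth:
            return month
    return None

print("KRI threshold alert at month:", kri_alert_month(monthly_deviations, KRI_THRESHOLD))
print("Trend alert at month:", trend_alert_month(monthly_deviations))
```

On this series, the threshold KRI fires at month 5, while the trend check fires at month 3, two months of preventable errors earlier. The point is not the specific rule but the principle: watching the shape of the data beats waiting for a static line to be crossed.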
The Abbott Nutrition RBM team provided a wonderful example of adopting RBM within a company. They discussed senior leadership's championing of the program and the importance of focusing on change management, and they used a cross-functional team including Clinical, Data Management, and Statistics. Kudos to the presenters: Sonya Verrill, Geraldine Baggs, Ph.D., Dione Smart, Xiaosong (Sue) Zhang MS, MAS, and their whole team.
Yiwen Sun from Samumed also presented a small company's approach to implementing ICH E6(R2) and RBM.
Nicole Stansbury from PPD provided real-life examples of its RBM adoption, reporting a 16% decrease in critical/major findings, 17% better detection of significant deviations, and a 12-15% decrease in monitoring costs, with a 6-15% decrease in total trial costs even after including central monitoring costs.
Finally, a plea from me:  I am a member of an Expert Advisory Panel to the FDA on RBM.  The FDA wants to identify and understand the barriers to RBM adoption.  This is a great opportunity to make your voice heard.  Please complete a brief survey at www.surveymonkey.com/r/RBMBarriers.  Please forward this link to all your colleagues who are also interested in letting the FDA panel know their opinions.

Barriers to Adopting and Implementing RBM: Make Your Voice Heard!

By Penny Manasco - February 16, 2018

I am a member of an Expert Advisory Panel advising the FDA on barriers to adopting and implementing Risk Based Monitoring (RBM), eSource, and ICH E6(R2).
To provide a more comprehensive voice from the industry to the FDA Advisory Panel about the barriers to adopting and implementing RBM, I developed a survey to learn your opinions on and challenges in these areas. Responses are anonymous. If you participate and would like a copy of the results, I'll send you a summary of the survey's findings.
Now is your opportunity to make your voice heard!
The link to the survey is: www.surveymonkey.com/r/RBMBarriers


Metrics, Schmetrics—RBM and ICH E6(R2) require Study-Specific Analysis for Adequate Trial and CRO Oversight

By Penelope Manasco - October 31, 2017

I just finished listening again to a great presentation by Rajneesh Patil and Zabir Macci from Quintiles, “Achieve Better Outcomes with New RBM Technology Enhancements”.
Two comments Rajneesh made really struck home. When asked how their team uses metrics, he said that metrics were not particularly useful because they were retrospective and passive. The real need was for analytics.
Analytic tools must enable the review of Study-Specific critical data and processes (e.g., efficacy, safety, IP management, and Human Subject protection) for each study. These analytics cannot be standardized, but should be designed based on the Risk Assessment for the specific trial.
If a technology vendor tells you its tool requires all data and reports to be standardized: run, don't walk, away. Your solution (and oversight of your trial) must include reviewing critical study-specific data and processes.

Ask any C-level executive involved in developing medical products about the importance of data integrity. They will tell you data integrity (i.e., data withstanding regulatory scrutiny) is critical. Human Subject protection and early identification of issues that will "sink" a trial or a program will also top their critical-issues list. That is where the study team needs to focus, and these areas are not easily measured by any KPI.
Metrics such as "query rate" and "number of queries open for more than 90 days" may be meaningless; understanding the underlying data is not. For instance, queries are labor-intensive and costly for both the CRO and the sites. Do you know whether the queries open for longer than 90 days were appropriately opened? Do you know whether each query was even needed (i.e., raised on a critical data field versus a field not considered critical to the analysis of subject safety)? Query rate may also be inflated by multiple queries on the same field, giving spurious query-rate figures and obscuring whether a site even has an issue with queries.
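The gap between a raw metric and the underlying data can be shown in a few lines. The sketch below uses a hypothetical query log (the field names, criticality flags, and 90-day cutoff are illustrative assumptions, not any EDC system's actual schema) to contrast a naive "queries open > 90 days" count with a count of distinct critical fields actually affected.

```python
# Hypothetical query log: one row per query raised in the EDC system.
queries = [
    {"site": "site_7", "field": "primary_endpoint", "critical": True,  "days_open": 120},
    {"site": "site_7", "field": "primary_endpoint", "critical": True,  "days_open": 95},
    {"site": "site_7", "field": "visit_comment",    "critical": False, "days_open": 100},
    {"site": "site_9", "field": "ae_onset_date",    "critical": True,  "days_open": 12},
]

def raw_long_open_count(qs, cutoff=90):
    """Naive metric: every query open longer than the cutoff, critical or not."""
    return sum(1 for q in qs if q["days_open"] > cutoff)

def critical_fields_with_long_open_queries(qs, cutoff=90):
    """Distinct (site, field) pairs on critical data with a long-open query,
    deduplicating multiple queries raised on the same field."""
    return {(q["site"], q["field"]) for q in qs if q["critical"] and q["days_open"] > cutoff}

print("Raw >90-day queries:", raw_long_open_count(queries))
print("Critical fields affected:", critical_fields_with_long_open_queries(queries))
```

The raw metric reports three long-open queries; the deduplicated, criticality-aware view shows a single critical field at one site. Same data, very different picture of where oversight effort should go.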
The second important comment Rajneesh made speaks to the way we review trial data: they found 4x better quality (fewer errors in critical data fields) in trials that used analytics versus those that used SDV. And if you think about it, remote eCRF review in isolation should be considered the same as SDV. Why, you ask?
How data are presented makes a huge difference in what findings or trends you are able to detect. With eCRF review, CRAs review each page in isolation: form by form, visit by visit. They do not compare data across visits, across datasets, or across sources. It is easy to miss trends if you don't see the data across datasets, data sources, and timepoints.
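A toy example makes the form-by-form blind spot concrete. The sketch below uses hypothetical per-visit lab values for one subject (the analyte, limit, and trend rule are illustrative assumptions, not a validated signal-detection method): each visit reviewed in isolation looks unremarkable until late in the series, while a cross-visit view catches the rising trend.

```python
# Hypothetical per-visit lab values for one subject.
visits = [
    {"visit": 1, "alt_u_per_l": 30},
    {"visit": 2, "alt_u_per_l": 45},
    {"visit": 3, "alt_u_per_l": 68},
    {"visit": 4, "alt_u_per_l": 102},
]

NORMAL_UPPER = 55  # illustrative upper limit of normal

def page_by_page_flags(rows, limit):
    """Form-by-form review: flags only visits already above the limit."""
    return [r["visit"] for r in rows if r["alt_u_per_l"] > limit]

def rising_trend(rows, min_rises=3):
    """Cross-visit review: flags a run of consecutive increases,
    even while individual values are still within normal range."""
    rises = sum(1 for a, b in zip(rows, rows[1:]) if b["alt_u_per_l"] > a["alt_u_per_l"])
    return rises >= min_rises

print("Visits flagged page-by-page:", page_by_page_flags(visits, NORMAL_UPPER))
print("Rising trend across visits:", rising_trend(visits))
```

Page-by-page review flags only visits 3 and 4, after the value is already abnormal; the cross-visit view sees the consistent climb that began at visit 2. That is the difference between presenting data one form at a time and presenting it across visits and sources.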
We have developed simple, rapidly implemented (even for ongoing trials), study-specific tools to enhance and ensure comprehensive review of critical data and processes.
Please contact me to learn more about how to implement this critical component of trial oversight.