The RBM Scoop from SCOPE (Summit of Clinical Operations Professionals) 2018


By Penny Manasco - February 23, 2018

I just returned from the SCOPE meeting in Orlando.  I attended to learn the "state of the industry" on Risk Based Monitoring (RBM) and data analytics.  If you are not familiar with this conference, one caveat: only Sponsors and Exhibitors are allowed as speakers, which slants the view.  Still, there was a lot to learn.
 
SCOPE Surprises:
  1. Most CROs that presented and that I talked to (primarily large CROs) implemented RBM within their monitoring groups without incorporating the eSource data strategies that were finalized in 2013, concurrently with the RBM Guidance.  These include the guidance to collect data following the ALCOA principles (Attributable, Legible, Contemporaneous, Original, Accurate), which allow immediate access to data for central review without waiting for transcription.  Takeaway:  If you don't include the data strategy, you will always be saddled with those pesky source documents that hamper adoption of a true risk-based monitoring approach. The greatest benefit comes from a combined data collection and oversight strategy.
  2. Many big Pharmas that contributed to the TransCelerate Guidance for RBM outsource their trials to the big CROs.  This means the recommended methods are based primarily on the CRO adoption model, which still relies heavily on onsite visits.  This isn't surprising, because onsite visits are the major source of revenue for CROs.  Takeaway: There are many approaches to adopting RBM, and the TransCelerate approach may not be the best for you.
  3.  While not discussed much, the RBM group highlighted three areas for focus: 
    1. Recognizing the importance of senior leadership and change management in the adoption of RBM
    2. Revising the Request For Proposal (RFP) process to reflect the new model for trial oversight. Many, if not most, RFPs still ask for estimates based on Source Data Verification (SDV), estimates of time to do CRF review, and other aspects of trial management that do not incorporate the oversight activities RBM requires. Takeaway: Working with your Outsourcing group is critical. The "buckets" of spending will differ, and the RFP should be flexible enough to recognize these process changes.
    3. Revising the trial management model to focus site performance on critical issues, protocol compliance, and subject safety instead of questionable metrics such as the number of pages SDV'ed and the number of open queries.  Takeaway: This is a good first step, and one we take even further.  It is critical to train Trial Managers to understand quality management (i.e., identify critical issues, perform root cause analysis, and evaluate the resolution of issues) for the industry to truly adopt the principles of RBM.
  4. The love affair with Key Risk Indicators (KRIs) may be waning (YEAHHH!).  IQVIA noted (as we, and others, have found) that there is a lot of noise in KRIs.  We have also found that KRIs are blunt, lagging indicators: outliers are identified much later than with other methods of finding high-risk sites.  If you wait for a KRI to become critical, you have missed a big opportunity to prevent errors rather than correct them.
 
 
The Abbott Nutrition RBM team provided a wonderful example of adopting RBM within a company.  They discussed senior leadership's championing of the program and the importance of focusing on change management, and they used a cross-functional team including Clinical, Data Management, and Statistics in their programs.  Kudos to the presenters: Sonya Verrill, Geraldine Baggs, Ph.D., Dione Smart, Xiaosong (Sue) Zhang MS, MAS, and their whole team.
 
Yiwen Sun from Samumed also presented a small company's approach to implementing ICH E6(R2) and RBM.
 
Nicole Stansbury from PPD provided real-life examples of PPD's RBM adoption: a 16% decrease in critical/major findings, 17% better detection of significant deviations, and a 12-15% decrease in monitoring costs, with a 6-15% decrease in total trial costs even after including central monitoring costs.
 
Finally, a plea from me:  I am a member of an Expert Advisory Panel to the FDA on RBM.  The FDA wants to identify and understand the barriers to RBM adoption, and this is a great opportunity to make your voice heard.  Please complete the brief survey at www.surveymonkey.com/r/RBMBarriers, and forward the link to any colleagues who would also like the FDA panel to know their opinions.
 
 

Barriers to Adopting and Implementing RBM: Make Your Voice Heard!


By Penny Manasco - February 16, 2018

I am a member of an Expert Advisory Panel advising the FDA on barriers to adopting and implementing Risk Based Monitoring (RBM), eSource, and ICH E6(R2).
 
To give the FDA Advisory Panel a more comprehensive view of the industry's barriers to adopting and implementing RBM, I developed a survey to learn your opinions and challenges in these areas.  The results will be anonymous.  If you participate and want a copy of the results, I'll send you a summary of the survey's findings.
 
Now is your opportunity to make your voice heard!
 
The link to the survey is:    www.surveymonkey.com/r/RBMBarriers









Metrics, Schmetrics: RBM and ICH E6(R2) Require Study-Specific Analysis for Adequate Trial and CRO Oversight


By Penelope Manasco - October 31, 2017

I just finished listening again to a great presentation by Rajneesh Patil and Zabir Macci from Quintiles, “Achieve Better Outcomes with New RBM Technology Enhancements”.
 
Two comments Rajneesh made really struck home.  When asked how their team uses metrics, he said that metrics were not particularly useful because they were retrospective and passive.  The real need was for analytics.
 
Analytic tools must enable the review of Study-Specific critical data and processes (e.g., efficacy, safety, IP management, and Human Subject protection) for each study. These analytics cannot be standardized, but should be designed based on the Risk Assessment for the specific trial.
 
If a technology vendor tells you its tool requires all data and reports to be standardized: run, don't walk, away.  Your solution (and oversight of your trial) must include reviewing critical study-specific data and processes.

Ask any C-level executive involved in developing medical products about the importance of data integrity. They will tell you data integrity (i.e., data that withstand regulatory scrutiny) is critical.  Human Subject protection and early identification of issues that will "sink" a trial or a program will also top their critical issues list.   That is where the study team needs to focus, and these areas are not easily measured by any KPI.
 
Metrics such as "query rate" and "number of queries open for >90 days" may be meaningless.  Understanding the underlying data is not.  For instance, queries are labor-intensive and costly for both the CRO and the sites.  Do you know whether the queries open for longer than 90 days were appropriately opened?  Do you know whether each query was even needed (i.e., raised on a critical data field rather than on a field whose data are not critical to the analysis of subject safety)? Query rate can also be inflated by multiple queries on the same field, giving a spurious picture of query rates and of whether a site even has an issue with queries.
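To see why, here is a minimal sketch (my own illustration, not any vendor's report) using a hypothetical query log with made-up column names (site, subject, form, field, is_critical). Counting every query, counting unique queried fields, and counting only queries on critical fields give three very different pictures of the same sites:

    import pandas as pd

    # Hypothetical query log: one row per query raised in the EDC system.
    queries = pd.DataFrame({
        "site":        ["101", "101", "101", "102"],
        "subject":     ["101-001", "101-001", "101-001", "102-001"],
        "form":        ["VITALS", "VITALS", "VITALS", "AE"],
        "field":       ["SYSBP", "SYSBP", "SYSBP", "AESTDT"],
        "is_critical": [False, False, False, True],
    })

    # Naive metric: every query counts, so site 101 looks three times "worse" than site 102.
    raw_counts = queries.groupby("site").size()

    # Deduplicate to unique queried fields per subject: the three queries on the same
    # vital-signs field collapse to one, and the two sites now look the same.
    unique_fields = (queries.drop_duplicates(["site", "subject", "form", "field"])
                            .groupby("site").size())

    # Restrict to fields flagged as critical in the risk assessment: only site 102
    # has a query that touches data that matter to safety or efficacy.
    critical_only = queries[queries["is_critical"]].groupby("site").size()

    print(raw_counts, unique_fields, critical_only, sep="\n\n")

In this toy example the naive count makes site 101 look like the problem site, yet only site 102 has a query on a field that matters to the analysis, which is exactly the distinction a raw query-rate metric hides.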
 
The second important comment Rajneesh made speaks to the way we review trial data:  they found 4x better quality (fewer errors in critical data fields) in the trials that used analytics versus those that used SDV.  And if you think about it, remote eCRF review in isolation should be considered the same as SDV.  Why, you ask?
 
How data are presented makes a huge difference in what findings or trends you are able to analyze.  With eCRF review, CRAs review each page in isolation: form by form, visit by visit.  They do not compare data across visits, across datasets, and across sources.  It is easy to miss trends if you don't see the data across datasets, across data sources, and across time.
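Here is a small illustration of the point, using made-up weights for three subjects (a sketch only, not one of our tools). In long, page-by-page form each visit looks unremarkable; pivoting the same values across visits puts each subject on one row, and the subject whose value never changes stands out immediately:

    import pandas as pd

    # Made-up weights for three subjects, as they would appear one eCRF page at a time.
    long_data = pd.DataFrame({
        "subject":   ["001", "002", "003"] * 3,
        "visit":     ["V1"] * 3 + ["V2"] * 3 + ["V3"] * 3,
        "weight_kg": [72.4, 80.0, 65.1, 72.9, 80.0, 64.8, 73.1, 80.0, 64.5],
    })

    # Pivot so each subject is one row and each visit is one column.
    across_visits = long_data.pivot(index="subject", columns="visit", values="weight_kg")
    print(across_visits)

    # A simple cross-visit check: flag subjects whose value never changes from visit
    # to visit, a pattern that is easy to miss when each page is reviewed in isolation.
    never_changes = across_visits[across_visits.nunique(axis=1) == 1]
    print(never_changes)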
 
We have developed simple, rapidly implemented (even for ongoing trials), study-specific tools to enhance and ensure the comprehensive review of critical data and processes.
Please contact me to learn more about how to implement this critical component of trial oversight.
   
    
 
 

A Smart Bandage and RBM


By Penny Manasco - October 23, 2017

We've all heard the saying, "Out of the mouths of babes and minds of teenagers (oft times come gems)."  Ok, well, I made up the part about the "minds of teenagers," but here's why.
 
Anushka Naiknaware, a seventh-grade student, was the youngest person ever to win a top ranking in the International Google Science Fair.  She also won a Global Builder Award from event sponsor Lego for finishing among the top eight winners overall.
 
Her invention: a smart bandage. Here's how Ms. Naiknaware's teenage brain identified a problem and created a solution near and dear to all who embrace the value and virtues of remote monitoring and risk-based monitoring (RBM).
 
First, identify the problem.  Major wounds require specific levels of moisture to promote healing, but if healthcare providers change a patient's bandage too often, it can disrupt the body's healing process.
 
The solution: embed tiny monitors/sensors to track moisture levels and alert healthcare providers when the patient's bandage has dried to the level at which it should be changed to optimize the healing process.
 
Sound like RBM?  Absolutely.  Here's why:  Ms. Naiknaware identified the critical data (the requirement for a specific amount of moisture for proper healing) and the most important aspect of the outcome (change the bandage only when the moisture level drops below a critical value).  She then developed a tool to measure the moisture and signal that the bandage needed to be changed.  Her test was not a general one (the bandage has been in place a certain number of days) but was designed to measure her critical variable, which had a direct effect on the outcome.  By choosing and measuring a specific variable important to the outcome, Ms. Naiknaware was implementing a risk-based approach to monitoring the healing process.
 
Ms. Naiknaware deserves kudos for her ideas and invention (in addition to the scholarship and other prizes she won).  Google and Lego also deserve kudos for sponsoring the competition and rewarding ideas that use technology to monitor patients remotely and to reduce safety risks (e.g., delayed healing, infection, etc.).
 
Please let me know if you’ve seen any innovation that promotes remote monitoring and RBM.
 

Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see.


By Penny Manasco - September 18, 2017

Since the FDA and EMA released draft guidance on Risk Based Monitoring (RBM) and Electronic Source in 2011, Pharmas, Biotechs, and CROs have focused tremendous efforts on determining how these guidances affect monitoring activities onsite and remotely. 
 
Surprisingly, there have not been similar, significant efforts to determine how best to conduct data review.  In general, most monitors still review data using a page-by-page review of the eCRF, which mirrors the approach used in source data verification (SDV):  review of each data set by visit (e.g., vital signs, then physical exam, then ECG for visit 1, then a repeat for each subsequent visit).
 
Unfortunately, this approach does not optimally identify data discrepancies, because the data are not formatted in a way that makes errors visible. Monitors review the data page by page, so errors or discrepancies that occur across visits and across subjects at a site cannot be easily recognized.

A second challenge to monitoring critical data is the use of "standard" reports that fail to focus on study-specific data and processes. This results in the use of blunt, lagging indicators to identify study-specific high-risk sites and processes, to disappointing effect.
 
Finally, data review has focused on internal consistency (i.e., if a data point is outside the expected result, query it), but it does not evaluate the reason(s) why the data were erroneous.
 
While we commit tremendous efforts to determine how we will analyze trial data for efficacy, we, as an industry, need to spend the same level of effort on:
  1. How to best provide data for review and identify critical data findings specific to a research study or program.  There are many visualization tools that provide standard data visualizations, but they do not provide the study-specific insights truly needed.
  2. How to teach monitors and data managers to become detectives rather than box checkers.  They need to move from observing findings to determining why the events occurred and what methods are needed to fix them (a quality management approach).
 
In a recent presentation, we gave the participants a stack of CRF pages, similar to what monitors are expected to review. Team members had 5 minutes to review the pages and then 5 more minutes to review the data using our proprietary Subject Profile Analyzing Risk (SPAR) tool. SPAR synthesizes data across data sets and across visits, focused specifically on the high-risk data and processes identified during Risk Assessment.  The team immediately identified the errors using SPAR; they were unable to do so using only the eCRF pages.
 
In a separate data example, we showed that critical data must be reviewed using many different tools and reports.  Critical rating data must be reviewed using at least four different reports, each providing a different aspect of quality:  who did the assessments (i.e., were they trained and given the appropriate delegation), when the assessments were done, whether the assessments required additional tools (e.g., ePRO, photographs), and finally whether the ratings make sense over time.  We were able to show that each aspect identified different errors important to the primary efficacy endpoint.
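As a rough sketch of what four such complementary views might look like in code (the table layout, column names, and the 3-day and 10-point thresholds are all hypothetical; the real reports are designed from the study's Risk Assessment), each view slices the same ratings table along a different axis of quality:

    import pandas as pd

    # Hypothetical ratings table joined with delegation-log and training information.
    ratings = pd.DataFrame({
        "subject":         ["001", "001", "002", "002", "003", "003"],
        "visit":           ["V1", "V2", "V1", "V2", "V1", "V2"],
        "rater":           ["Dr. A", "Dr. B", "Dr. A", "Dr. A", "Dr. A", "Dr. A"],
        "rater_trained":   [True, False, True, True, True, True],
        "rater_delegated": [True, True, True, True, True, True],
        "visit_date":  pd.to_datetime(["2017-05-01", "2017-06-01", "2017-05-03",
                                       "2017-06-03", "2017-05-05", "2017-06-05"]),
        "rating_date": pd.to_datetime(["2017-05-01", "2017-06-01", "2017-05-20",
                                       "2017-06-03", "2017-05-05", "2017-06-05"]),
        "photo_on_file":   [True, True, True, False, True, True],
        "score":           [12, 13, 14, 15, 10, 28],
    })

    # 1. Who did the assessments: raters lacking training or delegation.
    who = ratings[~(ratings["rater_trained"] & ratings["rater_delegated"])]

    # 2. When the assessments were done: ratings recorded away from the visit date.
    when = ratings[(ratings["rating_date"] - ratings["visit_date"]).abs() > pd.Timedelta(days=3)]

    # 3. Required supporting tools: ratings missing the photograph (or ePRO entry).
    tools = ratings[~ratings["photo_on_file"]]

    # 4. Do the ratings make sense over time: large jumps between consecutive visits.
    trend = ratings.sort_values(["subject", "visit"]).copy()
    trend["change"] = trend.groupby("subject")["score"].diff()
    implausible = trend[trend["change"].abs() > 10]

    print(who, when, tools, implausible, sep="\n\n")

In this made-up data, each view flags a different record: an untrained rater, a rating entered more than two weeks after the visit, a missing photograph, and a score change too large to be plausible, which is why the same critical data need to be looked at from several angles.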
 
In that presentation, we also provided examples of non-essential trial activities that did not add to the quality of the trial data but added weeks of additional, non-productive work for the study and site teams.  These observations provide the basis for a comprehensive, efficient, study-specific, cost-effective approach to data review, one that can be implemented rapidly with a small, well-trained staff, saving significant costs while enhancing study quality.
 
Defining what data are important for evaluating your scientific findings, and how best to illustrate those findings, are essential steps toward successfully implementing RBM principles.  As an industry, we need to spend the same amount of time on data presentations for clinical operations as we do identifying, reviewing, and analyzing data for submission.

The importance of training cannot be overstated, nor should training be classified as necessary only at study start-up. As an industry, we have trained our monitors to perform at the most basic levels of Bloom's Taxonomy of Learning Domains (see below).  We need to move our monitors (just as we have moved children in school) to more advanced cognitive activities. Monitors need to be trained to move from Remembering/Understanding to Analyzing and Evaluating.
 

 
Figure 1: Bloom’s Taxonomy of Cognitive Domains
Please join me for a FREE presentation: "Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see."
It demonstrates the power of data presentation and shows how MANA RBM's custom, proprietary training methods can move study team members from Remembering to Evaluating data, with the resulting gains in data quality.
 
October 19, 2017
9:00-11:00 a.m. E.S.T.
Congressional Room
N.C. Biotech Center
15 TW Alexander Drive
Research Triangle Park, NC 27709-3547

Research Triangle Park is located between Raleigh, Durham, and Chapel Hill NC, minutes from the Raleigh Durham International Airport.
 
To Register for this free presentation (limited to 100 participants):
https://www.eventbrite.com/e/lessons-learned-in-rbm-implentation-data-review-for-clinical-trials-tickets-37859800683
 
 