Metrics, Schmetrics—RBM and ICH E6(R2) Require Study-Specific Analysis for Adequate Trial and CRO Oversight


By Penelope Manasco - October 31, 2017

I just finished listening again to a great presentation by Rajneesh Patil and Zabir Macci from Quintiles, “Achieve Better Outcomes with New RBM Technology Enhancements”.
 
Two comments Rajneesh made really struck home. When asked how their team uses metrics, he said that metrics were not particularly useful because they were retrospective and passive; the real need was for analytics.
 
Analytic tools must enable the review of Study-Specific critical data and processes (e.g., efficacy, safety, IP management, and Human Subject protection) for each study. These analytics cannot be standardized, but should be designed based on the Risk Assessment for the specific trial.
 
If a technology vendor tells you its tool requires all data and reports to be standardized: run, don't walk, away. Your solution (and oversight of your trial) must include reviewing critical study-specific data and processes.

Ask any C-level executive involved in developing medical products about the importance of data integrity. They will tell you data integrity (i.e., data withstanding regulatory scrutiny) is critical. Human Subject protection and early identification of issues that will "sink" a trial or a program will also top their critical issues list. That is where the study team needs to focus—and these areas are not easily measured by any KPI.
 
Metrics such as "query rate" and "number of queries open for more than 90 days" may be meaningless; understanding the underlying data is not. For instance, queries are labor-intensive and costly for both the CRO and the sites. Do you know whether the queries open for longer than 90 days were appropriately opened? Do you know whether each query was even needed (i.e., raised on a critical data field versus a field whose data are not critical to the analysis of subject safety)? Query rates can also be inflated by multiple queries on the same field, giving spurious rates and a misleading picture of whether a site even has a problem with queries.
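
As a purely illustrative sketch (the query log, column names, and criticality flags below are all hypothetical), a few lines of pandas show how a raw query count and a deduplicated, critical-fields-only count can tell very different stories about the same sites:

```python
import pandas as pd

# Hypothetical query log: one row per query raised in the EDC.
# Column names are illustrative, not from any specific system.
queries = pd.DataFrame({
    "site":      ["101", "101", "101", "102", "102"],
    "subject":   ["101-001", "101-001", "101-002", "102-001", "102-001"],
    "field":     ["SBP", "SBP", "SBP", "AETERM", "VISDAT"],
    "critical":  [True, True, True, True, False],
    "days_open": [95, 12, 101, 30, 120],
})

# Naive metric: raw query count per site, inflated by repeat
# queries on the same field for the same subject.
naive = queries.groupby("site").size()

# Sharper view: count each subject/field pair once, and keep only
# queries raised on fields flagged critical in the risk assessment.
dedup = (queries[queries["critical"]]
         .drop_duplicates(subset=["subject", "field"])
         .groupby("site").size())

print(pd.DataFrame({"raw_queries": naive, "critical_unique": dedup}).fillna(0))
```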
 
The second important comment Rajneesh made speaks to the way we review trial data: they found 4x better quality (fewer errors in critical data fields) in trials that used analytics versus those that used SDV—and if you think about it, remote eCRF review in isolation should be considered the same as SDV. Why, you ask?
 
How data are presented makes a huge difference in what findings or trends you are able to see. With eCRF review, CRAs review each page in isolation: form by form, visit by visit. They do not compare data across visits, across datasets, or across sources. It is easy to miss trends if you do not see the data across datasets, data sources, and time.
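
To illustrate (a sketch only; the dataset and column names are invented), reshaping a long, form-by-form extract so that each subject's visits sit side by side is where trends become visible:

```python
import pandas as pd

# Hypothetical vital-signs data as a monitor receives it: one row
# per form, i.e., per subject per visit (long format).
vitals = pd.DataFrame({
    "subject":   ["001", "001", "001", "002", "002", "002"],
    "visit":     ["V1", "V2", "V3", "V1", "V2", "V3"],
    "weight_kg": [82.0, 82.1, 82.0, 70.4, 65.1, 59.8],
})

# Page-by-page review sees each row alone; pivoting puts every
# visit for a subject on one line, so a trend (subject 002 losing
# roughly 5 kg per visit) becomes obvious at a glance.
profile = vitals.pivot(index="subject", columns="visit", values="weight_kg")
profile["delta_V1_V3"] = profile["V3"] - profile["V1"]
print(profile)
```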
 
We have developed simple, rapidly-implemented (even for ongoing trials) study-specific tools to enhance and ensure the comprehensive review of critical data and processes.
Please contact me to learn more about how to implement this critical component of trial oversight.

A Smart Bandage and RBM


By Penny Manasco - October 23, 2017

We've all heard the saying, "Out of the mouths of babes and minds of teenagers (oft times come gems)." OK, well, I made up that last part about the "minds of teenagers," but here's why.
 
Anushka Naiknaware, a seventh-grade student, was the youngest person ever to win a top ranking in the International Google Science Fair.  She also won a Global Builder Award from event sponsor Lego for finishing overall in the top-eight winners.
 
Her invention: a smart bandage. Here's how Ms. Naiknaware's teenage brain identified a problem and created a solution near and dear to all who embrace the value and virtues of remote monitoring and Risk-Based Monitoring (RBM).
 
First, identify the problem. Major wounds require specific levels of moisture to promote healing, but if healthcare providers change a patient's bandage too often, it can disrupt the body's healing process.
 
The solution: embed tiny sensors in the bandage to monitor moisture levels and alert healthcare providers when a patient's bandage has dried to the level where it should be changed to optimize the healing process.
 
Sound like RBM? Absolutely. Here's why: Ms. Naiknaware identified the critical data (the requirement for specific amounts of moisture for proper healing) and the most important aspect of the outcome (change the bandage only when the moisture level drops below a critical value). She then developed a tool to measure the moisture and signal that the bandage needed to be changed. Her test was not a general test (e.g., the bandage had been in place a certain number of days), but rather was designed to measure her critical variable, which had a direct effect on the outcome. By choosing and measuring a specific variable important to the outcome, Ms. Naiknaware was implementing a risk-based approach to monitoring the healing process.
 
Ms. Naiknaware deserves kudos for her ideas and invention (in addition to the scholarship and other prizes she won).  Google and Lego also deserve kudos for sponsoring the competition and rewarding ideas that use technology to monitor patients remotely and to reduce safety risks (e.g., delayed healing, infection, etc.).
 
Please let me know if you’ve seen any innovation that promotes remote monitoring and RBM.
 

Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see.


By Penny Manasco - September 18, 2017

Since the FDA and EMA released draft guidances on Risk-Based Monitoring (RBM) and Electronic Source in 2011, Pharmas, Biotechs, and CROs have focused tremendous effort on determining how these guidances affect monitoring activities onsite and remotely.
 
Surprisingly, there has not been a similar, significant effort to determine how best to conduct data review. In general, most monitors still review data page by page in the eCRF, an approach similar to that used in SDV: review of each dataset by visit (e.g., vital signs, then physical exam, then ECG for visit 1, repeated for each subsequent visit).
 
Unfortunately, this approach does not reliably identify data discrepancies because the data are not formatted in a way that exposes errors. Monitors review the data page by page, so errors or discrepancies that occur across visits and across subjects at a site cannot be easily recognized.

A second challenge to monitoring critical data is the use of “standard” reports that fail to focus on study-specific data and processes. This results in the use of blunt, lagging indicators to identify study-specific high-risk sites and processes—to disappointing effect.
 
Finally, data review has focused on internal consistency (i.e., if a data point is outside the expected range, query it), but it does not evaluate the reason(s) the data were erroneous.
 
While we commit tremendous effort to determining how we will analyze trial data for efficacy, we, as an industry, need to spend the same level of effort on:
  1. How best to provide data for review and identify critical data findings specific to a research study or program. Many visualization tools provide standard data visualizations, but they do not provide the study-specific insights truly needed.
  2. How to teach monitors and data managers to become detectives rather than box checkers. They need to move from observing findings to determining why events occurred and what is needed to fix them (a quality management approach).
 
In a recent presentation, we gave participants a stack of CRF pages similar to what monitors are expected to review. Team members had 5 minutes to review the pages and then 5 more minutes to review the same data using our proprietary Subject Profile Analyzing Risk (SPAR) tool, which synthesizes data across datasets and across visits, focused specifically on the high-risk data and processes identified during Risk Assessment. The team immediately identified the errors using the SPAR; they were unable to do so using the eCRF pages alone.
 
In a separate data example, we showed that critical data must be reviewed using many different tools and reports. Critical rating data must be reviewed using at least four different reports, each addressing a different aspect of quality: who did the assessments (i.e., were they trained and given the appropriate delegation), when the assessments were done, whether the assessments required additional tools (e.g., ePRO, photographs), and finally whether the ratings make sense over time. We were able to show that each aspect identified different errors important to the primary efficacy endpoint.
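
As a minimal sketch of just the first of those four reports (the "who"), with entirely hypothetical extracts and column names, rater IDs on the rating forms can be cross-checked against the site's delegation log:

```python
import pandas as pd

# Hypothetical extracts: ratings from the eCRF and the site's
# delegation log. All names and dates are illustrative.
ratings = pd.DataFrame({
    "subject":   ["001", "001", "002"],
    "visit":     ["V1", "V2", "V1"],
    "rater_id":  ["jsmith", "jsmith", "tlee"],
    "rate_date": pd.to_datetime(["2017-03-01", "2017-04-02", "2017-03-05"]),
})
delegation = pd.DataFrame({
    "rater_id": ["jsmith", "tlee"],
    "task":     ["rating", "phlebotomy"],  # tlee was never delegated rating
    "start":    pd.to_datetime(["2017-01-01", "2017-01-01"]),
    "end":      pd.to_datetime(["2017-12-31", "2017-12-31"]),
})

# Flag ratings performed by staff not delegated the rating task
# on the date of the assessment.
merged = ratings.merge(delegation[delegation["task"] == "rating"],
                       on="rater_id", how="left")
ok = merged["start"].notna() & merged["rate_date"].between(merged["start"],
                                                           merged["end"])
print(merged.loc[~ok, ["subject", "visit", "rater_id", "rate_date"]])
```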
 
In that presentation, we also provided examples of non-essential trial activities that did not add to the quality of the trial data but added weeks of non-productive work for the study and site teams. These observations provide the basis for a comprehensive, efficient, study-specific, cost-effective approach to data review, one that can be implemented rapidly with a small, well-trained staff, saving significant costs while enhancing study quality.
 
Defining what data are important for evaluating your scientific findings, and how best to illustrate those findings, are essential steps toward successfully implementing RBM principles. As an industry, we need to spend as much time on data presentations for clinical operations as we do identifying, reviewing, and analyzing data for submission.

The importance of training cannot be overstated, nor should training be classified as necessary only at study start-up. As an industry, we have trained our monitors to perform at the most basic levels of Bloom's Taxonomy of Learning Domains (see Figure 1 below). We need to move our monitors (just as we have moved children in school) to more advanced cognitive activities: from Remembering and Understanding to Analyzing and Evaluating.
 

 
Figure 1: Bloom’s Taxonomy of Cognitive Domains
Please join me for a FREE presentation: "Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see."
It demonstrates the power of data presentation and how MANA RBM's custom, proprietary training methods can move study team members from Remembering to Evaluating data, with resulting enhanced data quality.
 
October 19, 2017
9:00-11:00 a.m. EDT
Congressional Room
N.C. Biotech Center
15 TW Alexander Drive
Research Triangle Park, NC 27709-3547

Research Triangle Park is located between Raleigh, Durham, and Chapel Hill NC, minutes from the Raleigh Durham International Airport.
 
To Register for this free presentation (limited to 100 participants):
https://www.eventbrite.com/e/lessons-learned-in-rbm-implentation-data-review-for-clinical-trials-tickets-37859800683
 
 

The One RBM Activity You Should Never Omit

Risk Assessment: The single most important activity to complete when implementing RBM

By Penny Manasco - July 19, 2017

Working in Remote Trial Management and Risk-Based Monitoring (RBM), I used sponsor insights and my many years of clinical development experience to determine what data were important. That approach is now history. I HAVE SEEN THE LIGHT, and it is RISK ASSESSMENT.
 

What??? Filling out a stupid form to identify study risks? How can that help my trial?
 

I just finished leading a Risk Assessment activity for a study. The room was filled with many different functional teams working on the study (Program Management, Data Management, Monitoring, Medical/Safety, IP, Site representatives)—each with years of experience. Each confirmed the great value they received by participating in this effort.

 

Simply put, we all developed a common understanding of the areas of greatest risk for our project and how each of us would contribute to managing those risks. Here are two concrete examples:


This study depends on subject diaries for key endpoints. We asked the all-important question: What can go wrong? From that, we identified all the ways diary collection could go wrong and developed data collection and oversight plans to assure optimal oversight and eliminate the loss of valuable subjects to avoidable issues. Here is a snapshot of some of our outcomes.
 

1. Risk: Subjects will not complete the diary daily as needed (Primary Efficacy).
   a. We planned a test diary entry at the screening visit to assure subjects could access the diary application and enter data as needed on their device (DM Responsibility).
   b. We planned subject training to assure subjects would accurately enter the correct data in their diaries (Site Responsibility).
   c. We worked as a team to determine how we could support subjects with text notifications if they were coming close to missing their visit window (DM/Technology Responsibility).
   d. We incorporated text notifications to the Study Coordinators to alert them to subjects missing multiple diary days (DM/Technology Responsibility).
   e. We planned reports to easily identify subjects "at risk" of not completing diary entry requirements (DM/Monitoring Responsibility); a sketch of such a report follows these examples.
   f. We planned reports to identify subjects using prohibited medications (DM/Monitoring Responsibility).
 

2. Risk: The subject does not sign the informed consent prior to any assessments being performed (Subject Protection).
   a. Select an eSource system with eConsent linked, and require direct data entry to align with ALCOA (i.e., Attributable, Legible, Contemporaneous, Original, Accurate) principles (DM/Technology/Site Responsibilities).
   b. Design the eSource so the eConsent must be signed prior to release of any of the eSource/EDC forms (DM/Technology Responsibilities); a sketch of this gating logic also follows these examples.
   c. Design the eSource to collect all other questions required to document a correct informed consent process (DM/Monitoring Responsibilities).
   d. Design the eConsent to check the Delegation of Authority form to assure the person obtaining consent has the appropriate authority to perform the task (DM/Monitoring Responsibilities).
   e. Design the eSource system so that, if a new version of the consent is released and approved, reconsent is obtained prior to progressing with study assessments (DM/Monitoring Responsibilities).
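
To make item 1e above concrete, here is a minimal sketch of an "at risk" diary report. The entry log, the expected window, and the two-missed-day threshold are all hypothetical; the point is only that a daily ePRO log can be turned into an actionable alert list:

```python
import pandas as pd

# Hypothetical diary-entry log pulled from the ePRO system: one
# row per completed daily entry. Names and dates are illustrative.
entries = pd.DataFrame({
    "subject": ["001"] * 6 + ["002"] * 2,
    "entry_date": pd.to_datetime(
        ["2017-07-01", "2017-07-02", "2017-07-03", "2017-07-04",
         "2017-07-05", "2017-07-06", "2017-07-01", "2017-07-05"]),
})

window = pd.date_range("2017-07-01", "2017-07-06")  # expected diary days

# Count missed days per subject and flag anyone at or over an
# assumed threshold so the Study Coordinator can be alerted before
# the data (and the subject) are lost.
missed = entries.groupby("subject")["entry_date"].apply(
    lambda d: len(window.difference(d)))
print(missed[missed >= 2])  # subject 002: 4 missed days -> alert
```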
 
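And for item 2b, a minimal sketch of the consent-gating rule, assuming (hypothetically) that the eSource system exposes the consent version and signature timestamp; a real eConsent/eSource platform would enforce this internally:

```python
from datetime import datetime
from typing import Optional

# The gating rule from item 2b: no eSource/EDC form is released
# until the subject has signed the CURRENT consent version, so an
# approved amendment automatically forces reconsent.
CURRENT_CONSENT_VERSION = "v2.0"  # hypothetical version label

def can_open_form(signed_version: Optional[str],
                  signed_at: Optional[datetime]) -> bool:
    """Release a form only if the current consent version is signed."""
    return signed_at is not None and signed_version == CURRENT_CONSENT_VERSION

print(can_open_form("v2.0", datetime(2017, 7, 1)))  # True: form released
print(can_open_form("v1.0", datetime(2017, 3, 1)))  # False: reconsent needed
print(can_open_form(None, None))                    # False: not consented yet
```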

I hope this gives you some insight into why performing a Risk Assessment must be a cross-functional team activity and why it is absolutely critical to perform a Risk Assessment early in trial planning, prior to completion of the protocol.
 

While adopting an entire quality approach for your trials, as envisioned in the new ICH GCP Guidance that must be implemented starting July 2017, may seem daunting, you can begin by at least performing a risk assessment. You will never be sorry.
 

To help you get started, we offer several options:

• Free webinar on completing a Risk Assessment
July 26, 2017, Noon EDT
https://attendee.gotowebinar.com/register/7140655505300708099

• Attend a Live Training Program
August 24, 2017, 9:00-1:00 EDT, Raleigh, NC (more sites to be added)
https://attendee.gototraining.com/r/1173169395314409730
This hands-on program will equip users with the tools to conduct Risk Assessments for their protocols. The session will include developing Risk Assessments and planning for downstream oversight of risks.
Cost is $750/person; three people from the same organization: $2,000.
Class size is limited to 20 participants with two instructors.

• Customized Risk Assessment for Study Team
MANA RBM will lead your project team in conducting a risk assessment for your protocol. Please contact pmanasco@manarbm.com for further information on this new service.


Stop Clinical Research Agonization: Use RBM


By Penny Manasco - July 14, 2017

I recently read the following posts by two groups of research physicians. These investigators call CROs "Contract Research Agonizations," and for good cause: we have all lost sight of what data are important in a clinical trial.
 
https://www.ashclinicalnews.org/perspectives/editors-corner/contract-research-agonizations/
 
https://www.ashclinicalnews.org/perspectives/advocating-return-common-sense-clinical-research/
 
I applaud their impassioned plea: "It is time to start questioning the necessity of some aspects of the clinical trials data-collection process, particularly those that cause daily hassles before, during, and after a clinical trial – sometimes years after publication."
For over five years, the FDA, EMA, and ICH have told us, in their guidance documents on Risk-Based Monitoring (RBM) and GCP, that the clinical research industry's definition of quality (i.e., all check boxes checked, every form SDV'ed, and every query answered) is not adequate trial oversight. The regulators implored our industry to focus on critical data and employ rapid central review to identify and correct errors quickly, enhancing study conduct and subject safety.
The challenge for the clinical research industry is to understand what data are important and what the errors we identify tell us. Quality methods are based on investigating why errors occurred, not just that they did. You must first understand why the errors occurred to provide the best solution for correcting them.
 
Some of the burning questions you should ask are as follows:
  • Do the data make sense?
  • Do we see signals we need to investigate? The actual number of deviations is not as important as the qualitative data on what the deviations are and how they are distributed across sites and users (a sketch of such a breakdown follows this list).
  • What were the query topics?
    • Did queries indicate an issue with the design of the data collector, an issue with training, or an error in generating the query?
    • Should we query an issue not important to data analysis just because we can?
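
As a sketch of that deviation breakdown (the log and category labels below are invented), a simple cross-tab shows the distribution rather than the raw count:

```python
import pandas as pd

# Hypothetical deviation log. The same total of five deviations can
# mean one systemic problem at one site or scattered one-off errors.
devs = pd.DataFrame({
    "site":     ["101", "101", "101", "102", "103"],
    "category": ["visit window", "visit window", "visit window",
                 "missed lab", "visit window"],
})

# Site 101's repeated visit-window deviations suggest a process
# problem worth investigating, not merely querying.
print(pd.crosstab(devs["site"], devs["category"]))
```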
Deciding what not to do can be a very scary proposition for Sponsors and CROs. Clinical development teams must get involved to help Clinical Operations identify the really important areas to monitor in the clinical trial and the categories of non-critical data that are OK to monitor less frequently or not at all. Clearly, assuring all aspects of the primary efficacy and safety assessments is paramount.
Sponsors always want to know how to save money in trials. Here is a simple answer they often don't want to hear: identify what you don't need to do. You will free the sites to focus on what is important, and you will save money.
To enhance your relationship with your sites, design straightforward trials focused on what is important. Do not make busy work for the sites based on the historical mantra, "We've always done it this way."
We did a pivotal Phase III RBM trial with a tiny staff and a Sponsor who knew exactly what she wanted and how to support and focus our efforts. She had done many trials with traditional CROs that made frequent onsite visits—but they still missed important endpoint reviews, and an entire site was lost due to errors in the ratings, her primary endpoint. Our study was done for a fraction of the traditional, expected cost because we used tools that enabled immediate data review with a laser focus on what was important. Eighty percent of the sites said they would use our paperless approach again—I think that says it all.
Want to get started on RBM and implementing the new GCP guidance? Join our free webinar on conducting a Risk Assessment—that’s the first step!  https://attendee.gotowebinar.com/register/7140655505300708099
 
 