Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see.

By Penny Manasco - September 18, 2017

Since the FDA and EMA released draft guidance on Risk-Based Monitoring (RBM) and Electronic Source in 2011, Pharmas, Biotechs, and CROs have focused tremendous effort on determining how these guidances affect monitoring activities onsite and remotely.
Surprisingly, there have not been similar, significant efforts to determine how best to conduct data review. In general, most monitors still review data with a page-by-page review of the eCRF, an approach similar to that used in SDV: review of each data set by visit (e.g., vital signs, then physical exam, then ECG for visit 1, then a repeat for each subsequent visit).
Unfortunately, this approach does not optimally identify data discrepancies because the data are not formatted in a way that exposes errors. Because monitors review the data page by page, errors or discrepancies that occur across visits and across subjects at a site cannot be easily recognized.

A second challenge to monitoring critical data is the use of “standard” reports that fail to focus on study-specific data and processes. This results in the use of blunt, lagging indicators to identify study-specific high-risk sites and processes—to disappointing effect.
Finally, data review has focused on internal consistency (i.e., if a data point is outside the expected result, query the result) but has not evaluated the reason(s) why the data were erroneous.
While we commit tremendous efforts to determine how we will analyze trial data for efficacy, we, as an industry, need to spend the same level of effort on:
  1. How to best provide data for review and identify critical data findings specific to a research study or program.  There are many visualization tools that provide standard data visualizations, but they do not provide the study-specific insights truly needed.
  2. How to teach monitors and data managers to become detectives rather than box checkers.  They need to move from observing findings to determining why the events occurred and what methods are needed to fix them (a quality management approach).
In a recent presentation, we gave the participants a stack of CRF pages, similar to what monitors are expected to review. Team members had 5 minutes to review the pages and then 5 more minutes to review the data using our proprietary Subject Profile Analyzing Risk (SPAR). SPAR synthesizes data across data sets and across visits—specifically focused on the High Risk Data and Processes identified during Risk Assessment.  The team immediately identified the errors using the SPAR. They were unable to do so just using the eCRF pages. 
In a separate data example, we showed that critical data must be reviewed using many different tools and reports.  Critical rating data must be reviewed using at least 4 different reports—each providing a different aspect of quality:  who did the assessments (i.e., were they trained, given the appropriate delegation), when were the assessments done, did the assessments require additional tools (e.g., ePRO, photographs), and finally did the ratings make sense over time.  We were able to show that each aspect identified different errors important to the primary efficacy endpoint.
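To make the multi-report idea concrete, here is a minimal sketch of three of those quality aspects (who did the assessment, whether they were trained and delegated, and whether the ratings make sense over time) expressed as simple checks. The record layout, rater names, training-course name, and change threshold are all hypothetical assumptions for illustration; this is not MANA RBM's actual tooling.

```python
# Hypothetical example records; field names are assumptions, not a real EDC schema.
delegation = {"site01": {"jsmith": {"rating"}}}   # site -> rater -> delegated tasks
training = {"jsmith": {"rating_scale_v2"}}        # rater -> completed training
ratings = [
    {"subject": "001", "rater": "jsmith", "visit_day": 1,  "score": 22},
    {"subject": "001", "rater": "jsmith", "visit_day": 28, "score": 8},   # implausible drop
    {"subject": "002", "rater": "mdoe",   "visit_day": 1,  "score": 19},  # rater not delegated
]

def review_ratings(ratings, delegation, training, site="site01", max_change=10):
    """Return findings from three of the quality aspects described above."""
    findings = []
    by_subject = {}
    for r in ratings:
        # Aspect 1: was the rater delegated the rating task at this site?
        if r["rater"] not in delegation.get(site, {}):
            findings.append(("not_delegated", r["subject"], r["rater"]))
        # Aspect 2: was the rater trained on the scale?
        elif "rating_scale_v2" not in training.get(r["rater"], set()):
            findings.append(("not_trained", r["subject"], r["rater"]))
        by_subject.setdefault(r["subject"], []).append(r)
    # Aspect 3: do the ratings make sense over time?
    # Flag implausible visit-to-visit score jumps for clinical review.
    for subject, rows in by_subject.items():
        rows.sort(key=lambda r: r["visit_day"])
        for prev, cur in zip(rows, rows[1:]):
            if abs(cur["score"] - prev["score"]) > max_change:
                findings.append(("implausible_change", subject, cur["visit_day"]))
    return findings

findings = review_ratings(ratings, delegation, training)
```

The point of the sketch is that each check surfaces a different class of error, which is why no single standard report is sufficient for critical rating data.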
In that presentation, we also provided examples of non-essential trial activities that did not add to the quality of the trial data but added weeks of additional, non-productive work for the study and site teams. These observations provide the basis for a comprehensive, efficient, study-specific, cost-effective approach to data review: one that can be implemented rapidly by a small, well-trained staff, saving significant costs while enhancing study quality.
Defining what data are important to evaluate your scientific findings and how best to illustrate the data findings are essential steps toward successfully implementing RBM principles.  As an industry, we need to spend the same amount of time on data presentations for clinical operations as we do to identify, review, and analyze data for submission.

The importance of training cannot be overstated, nor should training be classified as necessary only at study start-up. As an industry, we have trained our monitors to perform at the most basic levels of Bloom's Taxonomy of Learning Domains (see below). We need to move our monitors (just as we have moved children in school) to more advanced cognitive activities: from Remembering/Understanding to Analyzing and Evaluating.

Figure 1: Bloom’s Taxonomy of Cognitive Domains
Please join me for a FREE presentation: "Lessons Learned in Implementing RBM Data Review for Clinical Trials: What you see now may not be what you need to see."
It demonstrates the power of data presentation and how MANA RBM’s custom, proprietary training methods can move study team members from Remembering to Evaluating data with the resulting enhanced data quality.
October 19, 2017
9:00-11:00 a.m. E.S.T.
Congressional Room
N.C. Biotech Center
15 TW Alexander Drive
Research Triangle Park, NC 27709-3547

Research Triangle Park is located between Raleigh, Durham, and Chapel Hill NC, minutes from the Raleigh Durham International Airport.
To Register for this free presentation (limited to 100 participants):

The One RBM Activity You Should Never Omit

Risk Assessment: The single most important activity to complete when implementing RBM

By Penny Manasco - July 19, 2017

Working in Remote Trial Management and Risk-Based Monitoring (RBM), I used sponsor insights and my many years of clinical development experience to determine what data were important. That approach is now history. I HAVE SEEN THE LIGHT and it is RISK ASSESSMENT.

What??? Filling out a stupid form to identify study risks? How can that help my trial?

I just finished leading a Risk Assessment activity for a study. The room was filled with many different functional teams working on the study (Program Management, Data Management, Monitoring, Medical/Safety, IP, Site representatives), each with years of experience. Each confirmed the great value they received by participating in this effort.


Simply put, we all developed a common understanding of the areas of greatest risk for our project and how each of us would contribute to managing those risks. Here are two concrete examples:

This study depends on subject diaries for key endpoints. We asked the all-important question: What can go wrong? From that, we identified all the ways diary collection could go wrong and developed a data collection and oversight plan to assure optimal oversight and to eliminate the loss of valuable subjects due to avoidable issues. Here is a snapshot of some of our plan:

1. Risk: Subjects will not complete the diary daily as needed (Primary Efficacy).
   a. We planned a test diary entry at the screening visit to assure the subjects could access the diary application and enter data as needed on their device (DM Responsibility).
   b. We planned subject training to assure the subjects would accurately enter the correct data in their diaries (Site Responsibility).
   c. We worked as a team to determine how we could support the subjects with text notifications if they were coming close to missing their visit window (DM/Technology Responsibility).
   d. We incorporated text notifications to the Study Coordinators to alert them to subjects missing multiple diary days (DM/Technology Responsibility).
   e. We planned reports to easily identify subjects at risk for not completing diary entry requirements (DM/Monitoring Responsibility).
   f. We planned reports to identify subjects using prohibited medications (DM/Monitoring Responsibility).
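The "at risk" report in item e can be sketched as a simple compliance check. The data layout and missed-day threshold below are illustrative assumptions for the sketch, not a real diary system.

```python
def diary_at_risk(expected_days, entries_by_subject, max_missed=2):
    """Flag subjects who have missed more than `max_missed` expected diary days."""
    at_risk = {}
    for subject, completed in entries_by_subject.items():
        # Days that were expected so far but have no diary entry.
        missed = sorted(set(expected_days) - set(completed))
        if len(missed) > max_missed:
            at_risk[subject] = missed
    return at_risk

expected = range(1, 8)  # days 1-7 expected so far
entries = {
    "001": [1, 2, 3, 4, 5, 6, 7],   # fully compliant
    "002": [1, 2, 5],               # missed days 3, 4, 6, 7 -> at risk
}
report = diary_at_risk(expected, entries)
```

A report like this, refreshed daily, lets coordinators intervene before a subject's diary data become unusable for the primary efficacy analysis.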

2. Risk: The subject does not sign the informed consent prior to any assessments being performed (Subject Protection).
   a. Select an eSource system with eConsent linked and require direct data entry to align with ALCOA (i.e., Attributable, Legible, Contemporaneous, Original, Accurate) principles (DM/Technology/Site Responsibilities).
   b. Design the eSource so the eConsent must be signed prior to release of any of the eSource/EDC forms (DM/Technology Responsibilities).
   c. Design the eSource to collect all other questions required to document the correct informed consent process (DM/Monitoring Responsibilities).
   d. Design the eConsent to check the Delegation of Authority form to assure the person obtaining consent has the appropriate authority to perform the task (DM/Monitoring Responsibilities).
   e. Design the eSource system so that reconsent can be obtained if a new version of the consent is released and approved prior to progressing with study assessments (DM/Monitoring Responsibilities).
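The gating logic in items b, d, and e above amounts to a single release check before any form opens. The record shapes, names, and version numbers here are illustrative assumptions, not a real eConsent system.

```python
def can_open_edc_forms(subject, consent_log, delegation_log, current_version):
    """Release eSource/EDC forms only if a valid, current consent is on file,
    obtained by someone delegated the consenting task."""
    consent = consent_log.get(subject)
    if consent is None:
        return False  # item b: no signed consent, no forms
    if consent["obtained_by"] not in delegation_log.get("consenting", set()):
        return False  # item d: consenter lacked delegated authority
    if consent["version"] != current_version:
        return False  # item e: reconsent needed on the new consent version
    return True

delegation_log = {"consenting": {"coordinator_a"}}
consent_log = {
    "001": {"version": 2, "obtained_by": "coordinator_a"},
    "002": {"version": 1, "obtained_by": "coordinator_a"},  # old consent version
}
ok_001 = can_open_edc_forms("001", consent_log, delegation_log, current_version=2)
ok_002 = can_open_edc_forms("002", consent_log, delegation_log, current_version=2)
ok_003 = can_open_edc_forms("003", consent_log, delegation_log, current_version=2)
```

Putting the check in the system itself, rather than in a monitoring SOP, is what turns this risk mitigation from detection after the fact into prevention.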

I hope this gives you some insight into why performing a Risk Assessment must be a cross-functional team activity and why it is absolutely critical to perform a Risk Assessment early during trial planning, prior to completion of the protocol.

While adopting an entire quality approach for your trials, as envisioned in the new ICH GCP Guidance that must be implemented starting July 2017, may seem daunting, you can begin by at least performing a risk assessment. You will never be sorry.

To help you get started, we offer several options:

• Free webinar on completing a Risk Assessment
  July 26, 2017, Noon EDT

• Attend a Live Training Program
  August 24, 2017, 9:00-1:00 EDT, Raleigh, NC (more sites to be added)
  This hands-on program will equip users with the tools to conduct Risk Assessments for their protocols. The session will include developing Risk Assessments and planning for downstream oversight of risks.
  Cost is $750/person; three people from the same organization: $2,000.
  Class size is limited to 20 participants with two instructors.

• Customized Risk Assessment for Study Team
  MANA RBM will lead your project team in conducting a risk assessment for your protocol. Please contact pmanasco@manarbm.com for further information on this new service.


Stop Clinical Research Agonization: Use RBM

By Penny Manasco - July 14, 2017

I recently read posts by two groups of research physicians. These investigators call CROs "Contract Research Agonization," and for good cause. We have all lost sight of what data are important in a clinical trial.
I applaud their impassioned plea: "It is time to start questioning the necessity of some aspects of the clinical trials data-collection process, particularly those that cause daily hassles before, during, and after a clinical trial – sometimes years after publication."
The FDA, EMA, and ICH have told us for over 5 years, in their Guidance documents on Risk-Based Monitoring (RBM) and GCP, that the clinical research industry's definition of quality (i.e., all check boxes are checked, every form SDV'ed, and every query answered) is not adequate trial oversight. The Regulators implored our industry to focus on critical data and employ rapid central review to identify and correct errors quickly, enhancing study conduct and subject safety.
The challenge for the Clinical Research industry is to understand what data are important and what the errors we identify tell us. Quality methods are based on investigating why errors occurred, not just that they did. You must first understand why the errors occurred to provide the best solution to correct them.
Some of the Burning Questions you should ask:
  • Do the data make sense? 
  • Do we see signals we need to investigate (e.g., what do the deviations tell us)? The actual number of deviations is not as important as the qualitative data on what the deviations are and how they are distributed across sites and users.
  • What were the query topics? 
    • Did queries indicate an issue with the design of the data collector, training, or an error in generating a query?
    • Should we query an issue not important in data analysis just because we can?
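To make the deviation question above concrete, here is a minimal sketch of tabulating deviations by site and type rather than just counting them. The records and the hotspot threshold are illustrative assumptions.

```python
from collections import Counter

# Illustrative deviation records; a real system would pull these from the EDC.
deviations = [
    {"site": "101", "type": "visit_out_of_window"},
    {"site": "101", "type": "visit_out_of_window"},
    {"site": "101", "type": "visit_out_of_window"},
    {"site": "102", "type": "missed_assessment"},
]

# Distribution by (site, type): three identical deviations at one site suggest a
# correctable site-level process problem, not three unrelated events.
by_site_and_type = Counter((d["site"], d["type"]) for d in deviations)
hotspots = [key for key, n in by_site_and_type.items() if n >= 3]
```

The raw total (four deviations) says little; the distribution points directly at the site and process to investigate.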
Deciding what not to do can be a very scary proposition for Sponsors and CROs. Clinical development teams must become involved to help Clinical Operations identify the really important areas to monitor in the clinical trial and the categories of non-critical data that are OK to monitor less frequently or not at all. Clearly, assuring the quality of all aspects of primary efficacy and safety assessments is paramount.
Sponsors always want to know how to save money in trials. Here's a simple way they often don't want to hear: identify what you don't need to do. You will free up time for the sites to focus on what is important, and you will save money.
To enhance your relationship with your sites, design straightforward trials focusing on what is important. Do not make busy work for the sites based on the historical mantra, "We've always done it this way."
We did a pivotal Phase III RBM trial with a tiny staff and a Sponsor who knew exactly what she wanted and how to support and focus our efforts. She had done many trials with traditional CROs that made frequent onsite visits, yet they still missed important endpoint reviews, and an entire site was lost due to errors in the ratings, her primary endpoint. Our study was done for a fraction of the traditional, expected cost because we used tools to enable immediate data review with a laser focus on what was important. 80% of the sites said they would use our paperless approach again; I think that says it all.
Want to get started on RBM and implementing the new GCP guidance? Join our free webinar on conducting a Risk Assessment: that's the first step! https://attendee.gotowebinar.com/register/7140655505300708099

Marching to the New Beat of RBM and ICH E6(R2): Data Management Leads the Band

A webinar presented for the Society of Clinical Data Managers on June 28, 2017

By Penny Manasco - June 19, 2017

The more MANA RBM implements risk-based monitoring (RBM) and ICH E6(R2), the more we appreciate the importance of data management (DM) in the process. I was pleased when the Society of Clinical Data Management (SCDM) invited me to speak on the topic of Data Management's role in adopting the new ICH E6(R2) Good Clinical Practice Guidelines approved in 2016. This live webinar will be on June 28 at 9:00 EST and 12:00 EST. The cost is $420 for non-members and $360 for SCDM members. For webinar details, here is the link: http://portal.scdm.org/node/1035. The registration link is https://www.mci-registrations.com/scdm/login-refresh/courses.aspx?grp=true&course=WEBJUN2017
Why is this topic important to everyone?
Data Managers will be key drivers in selecting systems to meet the new needs for contemporaneous data entry, rapid review of critical data (i.e., safety, efficacy, and protocol compliance), and data analytics for more robust, integrated data review at the subject level and across sites. 
Data Managers will need the creativity and knowledge to design systems that go far beyond simply collecting data for analysis.  MANA RBM already designs the newer, robust systems to perform electronic informed consent, manage training, facilitate remote review of protocol compliance, and even do robust checks to confirm that the correct users are performing the correct assessments at the correct times and that the users have the right training and delegation to perform those activities!
Data management review should occur immediately after the subject is seen and the data are entered. There is no need for DM to wait until monitors review a subject when the data are entered directly into electronic systems (i.e., eSource).
DM insights are critical to central review.  Instead of worrying about how many queries were closed, data managers will gain further insight into what queries have been raised and why, identify trends at sites for correctable errors, and help design systems to enhance the site’s successful participation in the trial. 
Data analytics is another key area where Data Managers can support the successful adoption of the new Good Clinical Practice Guidelines.  Here, study specific reporting to enable efficient review of critical data and processes identified during Risk Assessment is crucial to success. 

I hope you can attend this webinar.  We also invite all your colleagues or others interested in this topic to attend.
Stay tuned to our website and blogs too for some exciting RBM news.  In the works are new methods and systems that will significantly change how you conduct clinical trials and deliver cost savings you can see and take to the bank.

New ICH GCP Guidelines Implementation (including RBM and quality management): What do you need to know from your CRO?

By Penny Manasco - February 22, 2017

Late last year, the ICH finalized its new guidelines for GCP. These guidelines differ significantly from previous versions and should affect how you evaluate CROs' performance in these areas.
To help Sponsors better evaluate compliance with the new GCP requirements (expected to be implemented in mid-2017), Applied Clinical Trials recently published a paper, Does Your CRO Comply with ICH E6?, authored by Penelope Manasco, M.D., CEO of MANA RBM. It identifies critical questions for you to ask CROs to better evaluate whether, and to what degree, the CRO complies with the new changes in GCP.
The paper also provides the expected responses from the CROs to help Sponsors evaluate the answer provided versus the answer needed. If you want to learn more about evaluating CROs for compliance with the new ICH GCP Guidelines, please register for our webinar, Evaluating CROs for Compliance with the New GCP Guidelines, on Mar 3, 2017, 2:00 PM EST at:
After registering, you will receive a confirmation email containing information about joining the webinar.