The Key to RBM and my New Year's/Decade's Resolution: Ask the Right Question

A conversation with physician-scientist Mary Flack, M.D.

By Penny Manasco - January 6, 2020

Eight years ago, I had my first real experience working with a Sponsor who exemplified the principles of Risk-Based Quality Oversight in her study implementation expectations. That person was Dr. Mary Flack, an NIH-trained physician-scientist and VP of Clinical Research at NanoBio.

Dr. Flack had had enough of high-priced CROs that performed SDV yet missed critical systematic errors (e.g., incomplete dosing at sites, and one high-enrolling site that performed its primary efficacy endpoint assessment incorrectly and had to be excluded from the primary endpoint analysis). She needed a different approach to determine whether her investigational product worked.

Ask the Right Questions! Dr. Flack was very clear about her high-risk areas—activities that, if not performed correctly, would sink her trial or give equivocal results requiring additional multi-million-dollar trials.

The critical questions she asked, and the answers MANA RBM provided, were as follows:
  1. Q: How can I know that an error has occurred as soon as it happens, not months later?
     A: Use eSource to review the data in near real time, and use analytic tools to rapidly identify when an error occurs.
  2. Q: How can I know when a site rater is not following the natural history of the disease in their rating scales, in time to effect a change?
     A: Change the way data are reviewed (eSource removes the need for SDV) so that evaluation focuses on the critical errors that must be identified. The methods, sketched in code after this list, were as follows:
     • Ensure all raters were trained.
     • Review rating scales for each subject AT LEAST weekly to identify errors as they occurred, checking the pattern of the data, not just whether data were entered.
     • Identify systematic differences in performance across investigators (e.g., differences in rating of spontaneous resolution of symptoms across investigators at each site).
  3. Q: How can I know when a subject is not getting the recommended dose of the study medication, as soon as it occurs?
     A: Add training, data collection, and oversight around study drug administration and dosing. Rapidly identify errors and correct them, precluding errors from becoming systematic.
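The weekly rating-scale review lends itself to simple automation. Below is a minimal sketch in Python, not MANA RBM's actual tooling: it assumes an eSource extract with hypothetical site, rater, subject, and symptom_score columns, and flags rater/subject pairs whose scores never change in a condition that should resolve spontaneously.

```python
# Minimal sketch only; column names and the flagging rule are illustrative
# assumptions, not MANA RBM's actual analytics.
import pandas as pd

def flag_flat_raters(ratings: pd.DataFrame, min_visits: int = 3) -> pd.DataFrame:
    """Flag rater/subject pairs whose symptom scores never change.

    In a condition expected to resolve spontaneously, a rater who records
    an identical score at every visit is not tracking the natural history
    of the disease and warrants review.
    """
    summary = (
        ratings.groupby(["site", "rater", "subject"])["symptom_score"]
        .agg(visits="count", distinct_scores="nunique")
        .reset_index()
    )
    return summary[(summary["visits"] >= min_visits) & (summary["distinct_scores"] == 1)]

# Weekly run against the latest eSource extract (hypothetical file name):
# ratings = pd.read_csv("esource_ratings_extract.csv")
# print(flag_flat_raters(ratings))
```

Aggregating the same summary to the rater level supports the third bullet as well: comparing each investigator's pattern of score changes against site and study averages exposes systematic rating differences.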

Dr. Flack clearly articulated the critical issues for successful implementation. MANA RBM developed an oversight approach that met her needs. The results, including a 50% drop in deviations compared with previous trials run by traditional CROs, exceeded her expectations.

Now fast forward nearly a decade. I recently asked her about that trial and what she sees as the future of Risk-Based Monitoring.

“I think we have to start asking the right questions to drive effective clinical trial oversight. I trained as a scientist, and while at the NIH and afterwards, I have asked the questions a scientist would want answered. You could call it Science Driven Oversight, or SDO, since we have to have an acronym.”

She further expounded on her thoughts. “I think many of our challenges with clinical trial oversight stem from the questions we ask. Just as we did nearly a decade ago, I want to know which activities in a trial will determine my ability to successfully complete the protocol as designed. I want to know what errors occur that can result in censoring subjects, in an incomplete understanding of whether the subject received the correct dose, and in all activities related to the primary endpoints—not just whether the values were collected, but whether the required processes defined in the protocol were followed. And I want to know whether the errors we see are systematic—a one-off error is one thing, but a systematic error that is not identified indicates that we didn’t ask the right question.”

Science Driven Oversight, as Dr. Flack described it, means asking first and foremost the questions that affect the outcome and analysis of the trial and subject safety. “You can see that asking ‘Were all pages SDV’ed or reviewed, and were all queries closed?’ will result in a very different outcome for oversight than asking ‘Were the processes for collecting the primary endpoint followed, and how do you know?’ or ‘Were there systematic errors identified at any site that could affect our ability to determine the safety of our investigational products, and how do you know?’”

Finally, Dr. Flack opined on SDO’s orthogonal view of clinical trial oversight: “When we focus our questions and oversight on the most important areas (Did X, affecting the primary efficacy endpoint, occur? Did Y, affecting subject safety, occur? Did Z occur multiple times?), we can be sure to find the high-impact, low-frequency errors and systematic errors that affect trial success.”

Using a scientific construct to determine the critical questions we ask for clinical trial oversight is really what the FDA envisions as Risk-Based Monitoring. We ask questions to determine whether errors occurred in the areas of highest risk to trial integrity.

So, for the new decade, I adopt, and encourage everyone working in clinical research to adopt, Dr. Flack’s mantra of Science Driven Oversight. I will think like a scientist, ask the questions a scientist would ask, and analyze study operations with the same rigor used in discovering innovative new medicines.
 

The Risks of KRI in RBM: A Response to “A Roadmap for Implementing Risk-Based Monitoring and Quality Management”


By Penny Manasco - June 11, 2019

There are a myriad of articles on RBM implementation. Some authors have us jumping immediately into tool selection before we have even determined WHAT we need to measure. It is essential that we identify the Errors that Matter for each protocol and how we will identify whether systematic errors occur. I hope this paper encourages you to step back and define the important errors that need to be monitored; only then can you determine whether the tool you choose will meet your needs. I am also encouraging all companies to systematically evaluate different RBM approaches, just as we evaluate different therapies, moving toward Evidence-Based Clinical Research and Quality Oversight.

http://www.appliedclinicaltrialsonline.com/risks-kri-rbm-response-roadmap-implementing-risk-based-monitoring-and-quality-management

Risks of Key Risk Indicators

Be sure you know what the data are and aren't telling you

By Penelope Manasco - April 18, 2019

Our team and collaborators continually evaluate the myriad Risk-Based Monitoring (RBM) methods to better understand their strengths and weaknesses.

Key Risk Indicators (KRIs) are often touted as an excellent approach to conducting RBM. While many Sponsors and CROs have adopted them, major issues have surfaced concerning KRIs’ ability to identify important areas of risk that can critically affect the integrity of a study.

For many, using KRIs is considered “a good first step.” It is an improvement compared with no overall trend analysis.

It is important to note, however, that recent data we have identified bring those ideas into question. Here’s why:
  1. Non-specific KRIs such as deviation counts, or even deviation rates (corrected for the number of subjects randomized), may miss critical, low-incidence deviations. If all deviations are considered together, the critical, low-incidence deviations may be completely masked by the higher rates of all other deviations. A critical finding that could affect the integrity of the data is therefore missed entirely because of the differing incidence rates (see the sketch after this list).

Example: One site may miss the correct time frame for the RECIST measurements, a critical efficacy measure used in solid tumor oncology trials. Efficacy endpoints in these trials are often time to progression; if the imaging is not taken at the correct interval, the time to progression cannot be accurately interpreted. This deviation can result in loss of integrity for all the efficacy data at that study site and put subjects at undue risk (because their data cannot be used for the trial endpoint). It may be a low-incidence deviation compared with all other deviations in the trial, but it still represents a critical quality finding.
  2. KRIs often miss study-specific process errors that can affect trial quality, particularly if the errors span different databases or require the use of audit trails.

Example: Many studies (e.g., CNS studies, dermatology studies) require assessment tools administered by a trained investigator. KRIs are not designed to evaluate whether the primary assessment was conducted by the correct person and whether that person was appropriately trained. The data needed to make this evaluation usually sit in separate databases and may include the audit trail. Standard KRIs will miss this critical protocol-specific process error, and missing it may result in censoring multiple subjects for whom the endpoint was not correctly completed and documented.
  3. Most KRIs require significant data to accrue before the KRI moves to a higher risk level. This means the event of concern may be repeated numerous times before the signal (a risk-level increase) is sufficient to identify the increased risk. The corollary is that KRIs are also insensitive to the correction of errors: once a KRI has been elevated to high risk, it is difficult or impossible to tell when a correction has been made.
  4. KRIs depend on the stage of each subject in the study. Adverse event rates are often used as a quality KRI, but comparisons across sites depend on the phase of the study at each site.

Example: If subjects at one site have already received study drug, they should not be compared with subjects at another site who have not yet received treatment. Similarly, a site whose subjects have been treated for nearly a year should not be compared with a site whose subjects are newly treated; rates of adverse events will likely differ based on timing alone.
  5. KRIs vary based on the underlying analytics and data structure. Simple data summaries often assume that the data have an underlying normal distribution, but without direct examination of this assumption (e.g., via scatter plots or distribution plots), overall conclusions may be incorrect and important discrepancies may be missed.

Example: If you measure counts of an event, you need to know whether the counts have been corrected for the number of subjects and what correction was applied. For instance, the subject correction factor for screen failures differs from that for early terminations. Standardizing the data based on the number of subjects, however, may artificially inflate the rate at sites with small numbers of subjects. Another approach, using Z scores (corrected for the distribution of the data across sites), may provide a better understanding of true outliers, but the caveats above still hold. (The sketch below illustrates both the masking effect from point 1 and this Z-score approach.)
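To make points 1 and 5 concrete, here is a minimal sketch with invented numbers; it is not a production KRI engine, and the sites, deviation categories, and counts are illustrative assumptions. Pooled deviation rates make the three sites look nearly identical, while per-category rates with site-level Z scores surface the site that is missing its imaging intervals.

```python
# Illustrative sketch only: sites, categories, and counts are invented.
import pandas as pd

deviations = pd.DataFrame({
    "site":     ["A", "A", "B", "B", "C", "C"],
    "category": ["visit_window", "imaging_interval"] * 3,
    "count":    [40, 0, 38, 0, 35, 6],   # site C misses imaging windows
    "subjects": [20, 20, 19, 19, 18, 18],
})

# Aggregate KRI: deviations per subject, all categories pooled together.
pooled = deviations.groupby("site").agg(
    total=("count", "sum"), subjects=("subjects", "first")
)
pooled["rate"] = pooled["total"] / pooled["subjects"]
print(pooled["rate"])  # A: 2.00, B: 2.00, C: 2.28 -- site C barely stands out

# Per-category rates, with Z scores computed across sites within a category.
deviations["rate"] = deviations["count"] / deviations["subjects"]
deviations["z"] = deviations.groupby("category")["rate"].transform(
    lambda r: (r - r.mean()) / r.std()
)
print(deviations[deviations["z"] > 1])  # only site C's imaging_interval is flagged
```

In the pooled view the worst site differs from the others by about 14%, a difference easily lost in noise; the per-category view isolates the critical deviation entirely at one site.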


We have found that using protocol-specific analytics to immediately identify critical data and process errors (i.e., errors that matter) when they occur is more efficient and effective than non-specific methods such as KRIs and statistical outlier identification. Protocol-specific errors-that-matter analysis does not depend on an arbitrary number of events accruing, nor does it require the extensive research needed to define the critical events for KRIs.

With our new patent-pending, first-to-market clinical quality management software, REACHER™, the process is fast and easy, requiring minimal Sponsor resources. Protocol-specific errors that matter, issues that matter, and safety triggers are analyzed daily, with critical findings arriving in your email as they occur.

Please let me know your experiences and thoughts about using KRIs, and whether you are interested in Risk-Based Monitoring using more sensitive and specific methods.
 

RBM and eCOA/ePRO: Pearls from a recent conference


By Penelope Manasco M.D. - November 12, 2018

I was asked to speak at a recent ePRO/eCOA conference on identifying and monitoring risks in ePRO/eCOA implementation. Much of the conference focused on “Big Pharma’s” implementation issues, which are usually not relevant to small companies. I did, however, pick up a few interesting “pearls” relevant to all companies, listed below.

Jonathan Helfgott, a tremendous speaker as usual, is the former Associate Director for Risk Science at the FDA (now at Hopkins and Stage 2 Innovations). He played an instrumental role in drafting the eSource and Risk-Based Monitoring guidance documents (among his many contributions).

Here were some of his talking points:
  • Clearly articulate endpoints, and never lose sight of what is important: people and processes. Penelope’s pearls: That is why trial oversight needs to include an analysis of processes, not just analysis of the data for submission.
  • Provide the agency with a data flow diagram; always be transparent.
  • QC all primary endpoint and safety data that have been changed; changed data are a red flag for the agency. Penelope’s pearls: That is why process review, including audit trail review, is an important component of trial oversight. Note that audit trail requirements are not recommendations but actual requirements, as stated in 21 CFR Part 11.
  • The chain of custody of data is important. Auditing who has access to critical data, and ensuring that staff who leave the study have their access to technology systems removed, is critical to assuring data integrity.
  • eSource is not just an FDA recommendation; the FDA actively promotes it. Penelope’s pearls: In a recent survey I conducted to determine the barriers to RBM adoption, 25% fewer organizations had adopted eSource than had adopted RBM, yet eSource significantly enhances Risk-Based Monitoring.
  • Pitfalls to avoid in trial conduct and oversight:
    • Treating all data equally: focus should be on critical data and processes, which should be specified up front.
    • Quality data is defined as the absence of errors that matter, according to Janet Woodcock, M.D. (Director of CDER/FDA). Mr. Helfgott used this to emphasize the importance of focusing on the critical data and processes.
    • Not appreciating the nature of data errors: it is important to recognize problems that affect multiple sites (i.e., systematic errors) and fix them in real time.
    • Ineffective methods for detecting trends
    • Equipment and instruments prone to malfunction
    • Poorly written protocols (with poorly defined endpoints) and eCRFs
 
Finally, I asked Mr. Helfgott how Sponsors and CROs using Risk-Based Monitoring can better prepare for audits. He worked on the algorithm used for auditing sites, and he provided this recommendation for preparing for audits:
  • Share your oversight plans with the FDA; even if they don’t respond, you can document that you provided the information. If findings are identified during an FDA audit and you have already submitted the monitoring approach you follow, the agency is unlikely to issue a warning letter based on those findings.


Our Risk-Based Monitoring method (the MANA Method) is focused completely on identifying and correcting errors that matter, including systematic errors. You cannot expect to find these important errors without specifically designing your oversight approach to do so. Every Sponsor should be asking: “Tell me specifically what errors that matter you are looking for, and how you will find them using your current monitoring approach.” If you are using SDV as your oversight approach, you cannot identify the errors that matter.

If you don’t like the answer you get, or are using SDV as your oversight for a critical trial, don’t hesitate to contact MANA RBM. We offer two approaches to executive oversight to identify errors that matter. Pmanasco@manarbm.com

 

RBM: The Ugly Truth About Clinical Trial Oversight Effectiveness


By Penny Manasco - October 3, 2018


Recently, we published a first-of-its-kind, head-to-head comparison of our RBM methodology (the MANA Method) versus traditional onsite SDV. This pilot study showed the MANA Method to be superior to SDV. (http://www.appliedclinicaltrialsonline.com/comparing-risk-based-monitoring-and-remote-trial-management-vs-sdv?pageID=1)
In researching the literature for similar pilot studies to include in the manuscript, we were struck by the paucity of data comparing the effectiveness of different clinical trial oversight methods.

Clinical trial management and oversight is a multi-billion-dollar industry with double-digit growth projected for the foreseeable future. The activities used to conduct oversight have been the focus of regulatory scrutiny since 2013, when the FDA and EMA released guidance documents recommending a different process and focus for trial oversight. The International Council for Harmonisation followed by releasing its new Good Clinical Practice guidance, which was adopted in 2017 by the EMA and in 2018 by the FDA.


The guidance recommended focusing oversight on the “errors that matter” (i.e., errors that affect trial integrity, data quality, study participant safety, investigational product management, and human subject protection) rather than the myriad of small, inconsequential errors that are usually identified, queried, and queried again an absurd number of times, resulting in “Contract Research Agonization” for research sites (https://www.ashclinicalnews.org/perspectives/editors-corner/contract-research-agonizations/).

In a recent survey of barriers to RBM adoption, which I developed and conducted as part of my work on an expert panel advising the FDA on RBM, the findings identified a myriad of “RBM” implementation approaches, each with a different ability to detect “errors that matter”.

Why are these findings critically important? Because using RBM methods that are not effective at finding “errors that matter” puts research participants at risk. Over 100,000 research participants take part in clinical trials each year (NIH 2015), and a large percentage of them have a disease and are enrolled to test the effectiveness of unproven treatments. If our oversight methods are ineffective, critical errors can be missed, affecting study participant safety, the interpretation of the effectiveness of medicines, and the risk:benefit assessment of new treatments.

Using ineffective oversight that misses “errors that matter” produces a myriad of negative consequences. One need only look at warning letters to understand the effects (e.g., those involving J&J, ICON, and ceftobiprole, which resulted in the FDA not approving ceftobiprole’s use in the U.S.).

Additionally, the findings in the large number of Complete Response Letters (CRLs) issued demonstrate the negative impact on subjects, development programs, and companies. The FDA issued 33 CRLs between January 2017 and May 2018; 80% were for small companies, and 33% included clinical issues. Missing “errors that matter” can result in the loss of a product and the loss of a company’s value and viability, in addition to unnecessarily putting research participants at undue risk.

We need to scientifically evaluate different oversight approaches to identify the most effective ones, just as we evaluate treatments. The unmet medical need for this type of research is greater than any other, because it affects all the scientific findings we use in evidence-based medicine.

We give a shout-out to PaxVax, which worked with MANA RBM to compare its traditional onsite SDV with MANA RBM’s MANA Method of RBM and Remote Trial Management. It is only when forward-thinking companies such as PaxVax encourage and promote this research that we improve our clinical trial processes.

After you read the paper (see the link above), let me know your thoughts.
 