Risks of Key Risk Indicators

Be sure you know what the data are and aren't telling you

By Penelope Manasco - April 18, 2019

Our team and collaborators continually evaluate the myriad Risk-Based Monitoring (RBM) methods to better understand their strengths and weaknesses.

Key Risk Indicators (KRIs) are often touted as an excellent approach to conducting RBM. While many Sponsors and CROs have adopted them, major issues have surfaced concerning KRIs’ ability to identify important areas of risk that can critically affect the integrity of a study.

For many, using KRIs is considered “a good first step.” It is an improvement over having no overall trend analysis at all.

It’s important to note, however, that recent data we have identified bring those ideas into question. Here’s why:
  1. Non-specific KRIs such as deviation counts, or even deviation rates (corrected for the number of subjects randomized), may miss critical, low-incidence deviations. When all deviations are pooled, a critical but infrequent deviation can be masked by the much higher rate of routine deviations, so a finding that could affect the integrity of the data is missed entirely (see the first sketch following this list).

Example: A site may miss the correct time window for RECIST imaging, a critical efficacy assessment in solid-tumor oncology trials, where the endpoint is often time to progression. If imaging is not performed at the correct interval, time to progression cannot be accurately interpreted. This deviation can compromise the integrity of all efficacy data at that study site and put subjects at undue risk (because their data cannot be used for the trial endpoint). It may be a low-incidence deviation compared with all other deviations in the trial, yet it still represents a critical quality finding.
  2. KRIs often miss study-specific process errors that can affect trial quality, particularly when detecting them requires combining data from different databases or reviewing audit trails.

Example: Many studies (e.g., CNS studies, dermatology studies) require assessment tools administered by a trained investigator. KRIs are not designed to evaluate whether the primary assessment was conducted by the correct person and whether that person was appropriately trained. The data needed for this check usually reside in separate databases and may include the audit trail. Standard KRIs will miss this critical protocol-specific process error, and missing it may mean censoring multiple subjects whose endpoint was not correctly completed and documented.
  3. Most KRIs require significant data to accrue before the KRI is raised to a higher risk level. This means the event of concern may be repeated numerous times before the signal (a risk-level increase) is strong enough to identify the increased risk. The corollary is that KRIs are also insensitive to the correction of errors: once a KRI has been elevated to high risk, it is difficult or impossible to tell when a correction has been made.
  4. KRIs depend on each subject’s stage in the study. Adverse event rates are often used as a quality KRI, but comparisons across sites depend on how far each site’s subjects have progressed through the study (see the exposure sketch following this list).

Example: If subjects at one site have already received study drug, they should not be compared with subjects at another site who have not yet been treated. A corollary would be comparing a site whose subjects have been treated for nearly a year with a site whose subjects are newly treated: rates of adverse events will likely differ based on timing alone.
  5. KRIs vary based on the underlying analytics and data structure. Simple data summaries often assume the data are normally distributed; without directly examining that assumption (e.g., via scatter plots or distribution plots), overall conclusions may be incorrect and important discrepancies may be missed.
Example: If you measure counts of an event, you need to know whether the counts have been corrected for the number of subjects and which correction was applied; for instance, the subject correction factor for screen failures differs from that for early terminations. Standardizing the data by the number of subjects, however, may artificially inflate the rate at sites with small numbers of subjects. Another approach, Z-scores (corrected for the distribution of the data across sites), may provide a better understanding of true outliers, but the caveats above still hold (see the final sketch below).
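To make point 1 concrete, here is a minimal sketch (in Python) of how a pooled deviation-rate KRI can hide a critical, low-incidence deviation. The site names and counts are invented purely for illustration.

```python
# Minimal sketch: a pooled deviation-rate KRI can mask a critical,
# low-incidence deviation. All site names and counts are invented.

sites = {
    # site: (subjects randomized, routine deviations, critical RECIST-timing deviations)
    "Site A": (40, 120, 0),
    "Site B": (38, 110, 4),  # the only site with the critical deviation
    "Site C": (42, 130, 0),
}

for site, (n_subjects, routine, critical) in sites.items():
    pooled_rate = (routine + critical) / n_subjects  # what a pooled KRI sees
    critical_rate = critical / n_subjects            # what actually matters
    print(f"{site}: pooled {pooled_rate:.2f}/subject, "
          f"critical {critical_rate:.2f}/subject")

# Pooled rates are ~3.0 deviations/subject at every site, so Site B does
# not stand out, even though only Site B has deviations that compromise
# the time-to-progression endpoint. Tracking the critical deviation as
# its own protocol-specific check surfaces it immediately.
```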
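For point 4, the sketch below contrasts a raw per-subject adverse event rate with an exposure-adjusted rate (events per patient-year). The numbers are hypothetical, chosen only to show how timing alone can drive the comparison.

```python
# Minimal sketch: raw AE comparisons across sites ignore treatment
# exposure. All numbers are invented for illustration.

sites = {
    # site: (subjects, total on-treatment exposure in patient-years, AE count)
    "Site A": (30, 25.0, 50),  # enrolled early: ~10 months of exposure each
    "Site B": (30, 2.5, 6),    # enrolled recently: ~1 month of exposure each
}

for site, (n_subjects, exposure_years, ae_count) in sites.items():
    per_subject = ae_count / n_subjects
    per_patient_year = ae_count / exposure_years
    print(f"{site}: {per_subject:.2f} AEs/subject, "
          f"{per_patient_year:.2f} AEs/patient-year")

# Per subject, Site B looks far "safer" (0.20 vs 1.67), but per
# patient-year the sites are comparable (2.40 vs 2.00): the raw
# comparison reflects timing, not site quality.
```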
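Finally, for point 5, this sketch shows both the small-denominator artifact and a simple cross-site Z-score. The counts are invented, and the normality caveat in the text still applies.

```python
# Minimal sketch: per-subject standardization inflates rates at small
# sites, and Z-scores inherit that distortion. Counts are invented.

from statistics import mean, stdev

deviations = {"A": 10, "B": 9, "C": 12, "D": 2}   # deviation counts per site
subjects   = {"A": 50, "B": 48, "C": 55, "D": 3}  # subjects randomized per site

rates = {s: deviations[s] / subjects[s] for s in deviations}
mu, sigma = mean(rates.values()), stdev(rates.values())
z_scores = {s: (rate - mu) / sigma for s, rate in rates.items()}

for s in deviations:
    print(f"Site {s}: rate {rates[s]:.2f}/subject, z = {z_scores[s]:+.2f}")

# Site D has only 2 deviations, but with 3 subjects its rate (0.67) is
# roughly triple the other sites' and it has the only positive z-score:
# an artifact of the tiny denominator, not necessarily a true quality
# signal. Z-scores also assume the rates are roughly normally
# distributed across sites, which should be checked (e.g., with a
# distribution plot) before acting on apparent outliers.
```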


We have found that using protocol-specific analytics to immediately identify critical data and process errors (i.e., errors that matter) when they occur is more efficient and effective than non-specific methods such as KRIs and statistical outlier identification. Protocol-specific errors-that-matter analysis does not depend on meeting an arbitrary number of events, nor does it require the extensive research needed to define the critical events for KRIs.

With our new patent-pending, first-to-market clinical quality management software, REACHER™, the process is fast and easy, requiring minimal Sponsor resources. Protocol-specific errors that matter, issues that matter, and safety triggers are analyzed daily, with critical findings arriving in your email as they occur.

Please let me know your experiences and thoughts about using KRIs, and whether you are interested in Risk-Based Monitoring using more sensitive and specific methods.
 

RBM and eCOA/ePRO: Pearls from a Recent Conference


By Penelope Manasco M.D. - November 12, 2018

I was asked to speak at a recent ePRO/eCOA conference on the topic of Identifying and Monitoring Risks with ePRO/eCOA Implementation. Much of the conference focused on “Big Pharma’s” implementation issues, which are usually not relevant to small companies. I did, however, learn a few interesting “pearls” relevant to all companies, listed below.

Jonathan Helfgott, a tremendous speaker as usual, is the former Associate Director for Risk Science at the FDA (now at Hopkins and Stage 2 Innovations). He played an instrumental role in drafting the eSource and Risk-Based Monitoring guidance documents (among his many contributions).

Here were some of his talking points:
  • Clearly articulate endpoints, and never lose sight of what is important: people and processes. Penelope’s pearls: That is why trial oversight needs to include analysis of processes, not just analysis of the data for submission.
  • Provide the agency with a data-flow diagram; always be transparent.
  • QC all primary endpoint and safety data that have been changed; changed data are a red flag for the agency. Penelope’s pearls: That is why process review, including audit trail review, is an important component of trial oversight (see the sketch after this list). Note that audit trails are not recommendations but actual requirements, as stated in 21 CFR Part 11.
  • The chain of custody of data is important. Auditing who has access to critical data, and ensuring that staff who leave the study have their access to technology systems removed, is critical to assuring data integrity.
  • eSource is not just an FDA recommendation; the FDA actively promotes it. Penelope’s pearls: In a recent survey I conducted to determine the barriers to RBM adoption, 25% fewer organizations had adopted eSource than had adopted RBM, yet eSource significantly enhances Risk-Based Monitoring.
  • Pitfalls to avoid in trial conduct and oversight:
    • Treating all data equally; focus should be on critical data and processes, which should be specified up front.
    • Losing sight of what quality means. Quality data is defined as the absence of errors that matter, according to Janet Woodcock, M.D. (Director of CDER/FDA). Mr. Helfgott used this to emphasize the importance of focusing on critical data and processes.
    • Not appreciating the nature of data errors. It is important to recognize problems that affect multiple sites (i.e., systematic errors) and fix them in real time.
    • Ineffective methods for detecting trends.
    • Using equipment and instruments prone to malfunction.
    • Poorly written protocols (with poorly defined endpoints) and eCRFs.
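As referenced above, here is a minimal sketch of the kind of audit-trail review this point implies: scanning an audit-trail export for changes to critical fields. The CSV layout and column names (subject, field, old_value, new_value, changed_by, changed_at) are hypothetical; real EDC exports vary by vendor.

```python
# Minimal sketch of an audit-trail review for changed critical data.
# The CSV columns below are hypothetical; real EDC audit-trail exports
# differ by vendor, so map your own export's fields accordingly.

import csv

CRITICAL_FIELDS = {"primary_endpoint", "sae_onset_date", "death_date"}

def flag_critical_changes(path):
    """Return audit-trail rows where a critical field's value was changed."""
    with open(path, newline="") as f:
        return [row for row in csv.DictReader(f)
                if row["field"] in CRITICAL_FIELDS
                and row["old_value"] != row["new_value"]]

if __name__ == "__main__":
    for row in flag_critical_changes("audit_trail_export.csv"):
        print(f"{row['subject']}: {row['field']} changed "
              f"{row['old_value']!r} -> {row['new_value']!r} "
              f"by {row['changed_by']} at {row['changed_at']}")
```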
 
Finally, I asked Mr. Helfgott how Sponsors and CROs using Risk-Based Monitoring can better prepare for audits. He worked on the algorithm used for auditing sites, and he offered this recommendation for audit preparation.
  • Share your oversight plans with the FDA; even if the agency does not respond, you can document that you provided the information. If findings are identified during an FDA audit and you have submitted the monitoring approach you follow, the agency is unlikely to issue a warning letter based on those audit findings.


Our Risk-Based Monitoring method (the MANA Method) is focused entirely on identifying and correcting errors that matter, including systematic errors. You can’t expect to find these important errors without specifically designing your oversight approach to do so. Every Sponsor should be asking: “Specifically, what errors that matter are you looking for, and how will you find them using your current monitoring approach?” If you are using SDV as your oversight approach, you cannot identify the errors that matter.

If you don’t like the answer you get, or you are using SDV as your oversight for a critical trial, don’t hesitate to contact MANA RBM. We offer two approaches to executive oversight to identify errors that matter. Pmanasco@manarbm.com

 

RBM: The Ugly Truth About Clinical Trial Oversight Effectiveness


By Penny Manasco - October 3, 2018


Recently, we published the first-of-its-kind, head-to-head comparison of our RBM methodology (the MANA Method) versus traditional onsite SDV. This pilot study showed the MANA Method to be superior to SDV (http://www.appliedclinicaltrialsonline.com/comparing-risk-based-monitoring-and-remote-trial-management-vs-sdv?pageID=1).
In researching the literature for similar pilot studies to include in the manuscript, we were struck by the paucity of data comparing the effectiveness of different clinical trial oversight methods.

Clinical trial management and oversight is a multi-billion-dollar industry, with double-digit growth projected for the foreseeable future. The activities used to conduct oversight have been the focus of regulatory scrutiny since 2013, when the FDA and EMA released guidance documents recommending a different process and focus for trial oversight. The International Council for Harmonisation followed by releasing its new guideline for Good Clinical Practice, adopted by the EMA in 2017 and by the FDA in 2018.


The guidance recommended focusing oversight on the “errors that matter” (i.e., errors that affect trial integrity, data quality, study participant safety, investigational product management, and human subject protection) rather than the myriad small, inconsequential errors that are usually identified, queried, and queried again an absurd number of times, resulting in “Contract Research Agonization” for research sites (https://www.ashclinicalnews.org/perspectives/editors-corner/contract-research-agonizations/).

In a recent survey of barriers to RBM adoption, which I developed and conducted as part of my work on an expert panel advising the FDA on RBM, the findings identified a myriad of “RBM” implementation approaches, each with a different ability to detect “errors that matter”.

Why are these findings critically important? Because using RBM methods that are not effective at finding “errors that matter” puts research participants at risk. Over 100,000 research participants take part in clinical trials each year (NIH 2015). Of those, a large percentage have a disease and are enrolled to test the effectiveness of unproven treatments. If our oversight methods are ineffective, critical errors can affect study participant safety, the interpretation of a medicine’s effectiveness, and the risk:benefit assessment of new treatments.

Using ineffective oversight that misses “errors that matter” produces a myriad of negative consequences. One need only look at warning letters to understand the effects (e.g., J&J, ICON, and ceftobiprole, which the FDA declined to approve for use in the U.S.).

Additionally, the findings in the large number of Complete Response Letters (CRLs) issued demonstrate the negative impact on subjects, development programs, and companies. The FDA issued 33 CRLs between January 2017 and May 2018; of those, 80% went to small companies and 33% included clinical issues. Missing “errors that matter” can result in the loss of a product, and of a company’s value and viability, in addition to unnecessarily putting research participants at undue risk.

We need to evaluate different oversight approaches scientifically, to identify the most effective ones, just as we evaluate treatments. The unmet medical need for this type of research is greater than any other, because it affects all the scientific findings we use in evidence-based medicine.

We give a “shout out” to PaxVax, which worked with MANA RBM to compare its traditional onsite SDV with MANA RBM’s MANA Method of RBM and Remote Trial Management. It is only when forward-thinking companies such as PaxVax encourage and promote this research that we improve our clinical trial processes.

After you read the paper (see the link above), let me know your thoughts.
 

Identifying and Monitoring Risks with ePRO /eCOA Implementation: Real-Life Examples


October 2, 2018

Penelope Manasco, M.D., CEO of MANA RBM, will be speaking at the eCOA/ePRO 2018 meeting on October 23, 2018, in Philadelphia. The title of the presentation is “Identifying and Monitoring Risks with ePRO/eCOA Implementation: Real-Life Examples.”
 

RBM: The Face of Quality From a Research Site Perspective


By Penny Manasco - September 4, 2018

Koski et al. published a perspective piece in the NEJM titled “Accreditation of Clinical Research Sites — Moving Forward” (June 28, 2018). The authors described their progress in developing standards for clinical research sites as the first step toward accreditation. They initially emphasized quality management: protecting trial participants’ rights and well-being and facilitating reproducible trial results.
 
I have been working with The CUSP Group, LLC, a consortium of 22 large tertiary community uro-oncology practices that are facile in conducting all phases of research. Arletta van Breda, RN, MSN, CCRC, CIP, Director of Clinical Research Operations at CUSP, demonstrated this constant focus on quality management. She described the continuously evolving mechanisms by which CUSP confirms the correct diagnosis and staging of subjects participating in all GU oncology studies conducted over the past 7 years: “Data granularity, as well as data fidelity, must go hand-in-hand with patient/subject protections.”

Not only did the CUSP Group set up processes to align with the new ICH E6(R2), but its process for confirming critical data can teach all of us a lesson. The group does not evaluate potential trials by whether they are registration trials; its philosophy is that every trial in which a subject participates is important, and the data should be complete and correct. Even a single blood draw is a “gift” the research subject gives to researchers. Ms. van Breda considers it her team’s responsibility, both at the consortium and at the study-site level, to be sure the data are correct so the results can contribute to better cures for patients. Additionally, Dr. Neal Shore, Founding Director of CUSP, underscores that the magnitude and significance of the data arising from these trials are enormous, given their analytical validation, clinical validation, or clinical utility objectives: “The science cannot advance if the data is not rock solid.”

In the era of patient-centric research discussions, this is a perfect example of how the patient should be front and center. Patients’ contributions to research are priceless. The entire clinical research community is responsible for assuring that the data we collect are correct, that processes follow the protocol, and that the controls used to assure subject safety are appropriately monitored, not just checked to confirm a data field was entered.
 
These same principles form the basis of ICH E6(R2), the new definition of Good Clinical Practice for clinical research, which provides guidance for all trials involving human subjects. If we conduct research using human subjects, it is our responsibility to make sure the study is conducted correctly, regardless of whether it will be submitted to a regulatory agency.
 
 
 
 