
RBM: The Ugly Truth About Clinical Trial Oversight Effectiveness


By Penny Manasco - October 3, 2018


Recently, we published the first-of-its-kind, head-to-head comparison of our RBM methodology (i.e., the MANA Method) versus traditional onsite SDV.  This pilot study showed the MANA Method to be superior to SDV. (http://www.appliedclinicaltrialsonline.com/comparing-risk-based-monitoring-and-remote-trial-management-vs-sdv?pageID=1)
While researching the literature for similar pilot studies to include in the manuscript, we were struck by the paucity of data comparing the effectiveness of different clinical trial oversight methods.

Clinical trial management and oversight is a multi-billion-dollar industry, with double-digit growth projected for the foreseeable future.  The activities used to conduct oversight have been the focus of regulatory scrutiny since 2013, when the FDA and EMA released guidance documents recommending a different process and focus for trial oversight.  The International Council for Harmonisation followed with its new guidance for Good Clinical Practice, which was adopted by the EMA in 2017 and by the FDA in 2018.


The guidance recommended focusing oversight on the “errors that matter” (i.e., errors that affect trial integrity, data quality, study participant safety, investigational product management, and human subject protection) rather than the myriad small, inconsequential errors that are usually identified, queried, and queried again an absurd number of times, resulting in “Contract Research Agonization” for research sites (https://www.ashclinicalnews.org/perspectives/editors-corner/contract-research-agonizations/).

A recent survey of barriers to RBM adoption, which I developed and conducted as part of my work on an expert panel advising the FDA on RBM, identified a myriad of “RBM” implementation approaches, each with a different ability to detect “errors that matter”.

Why are these findings critically important?  Because using RBM methods that are not effective at finding “errors that matter” puts research participants at risk.  More than 100,000 people participate in clinical trials each year (NIH 2015), and a large percentage of them have a disease and are enrolled to test the effectiveness of unproven treatments.  If our oversight methods are ineffective, critical errors can go undetected, affecting study participant safety, the interpretation of a medicine’s effectiveness, and the risk:benefit assessment of new treatments.

Using ineffective oversight that misses “errors that matter” produces a myriad of negative consequences. One need only look at FDA Warning Letters to understand the effects (e.g., those involving J&J, ICON, and ceftobiprole, which resulted in the FDA not approving ceftobiprole for use in the U.S.).

Additionally, the findings in the large number of Complete Response Letters (CRLs) issued demonstrate the negative impact on subjects, development programs, and companies.  The FDA issued 33 CRLs between January 2017 and May 2018; 80% went to small companies and 33% included clinical issues.  Missing “errors that matter” can result in the loss of a product and the loss of a company’s value and viability, in addition to putting research participants at undue risk.

We need to evaluate different oversight approaches scientifically to identify the most effective ones, just as we evaluate treatments.  The unmet need for this type of research is greater than any other because it affects all the scientific findings we use in evidence-based medicine.

We give a “shout out” to PaxVax, which worked with MANA RBM to compare its traditional onsite SDV with MANA RBM’s MANA Method of RBM and Remote Trial Management.  It is only when forward-thinking companies such as PaxVax encourage and promote this research that we improve our clinical trial processes.

After you read the paper (see the link above), let me know your thoughts.