The Evolution of Remote Trial Management and Oversight: The MANA Method


From the CEO:

First, an introduction. I am a physician-scientist, trained in basic and clinical research at the National Institutes of Health. I was a VP at a top-5 pharma company and an executive in two clinical trial software companies, and in 2012 I founded MANA RBM. MANA RBM is a Clinical Trials Institute and Research Laboratory.

Over nearly a decade, our mission has been to use scientific methods to design an approach to remote clinical trial management and oversight that efficiently identifies and corrects errors that matter (ETM) remotely and in real time. Errors that Matter are errors that can affect trial integrity: errors in the primary endpoint process, errors in any aspect of Investigational Product management, informed consent errors, safety errors, and inclusion/exclusion errors. As scientists, we tested every aspect of the trial design process and published our results in Applied Clinical Trials.

Process Development:

Our science-driven oversight approach is different. We identify the errors, specific to a protocol, that could "sink" a trial. We then ensure that the data needed to determine whether those errors have occurred are collected. Our protocol-specific oversight and analytics are designed to identify the errors quickly and to determine whether each error was a "one-off" or a symptom of a systematic problem.

Instead of counting the pages SDV'ed, we focus specifically on the errors that occurred and on how we can correct them as quickly as possible. For instance, instead of counting the number of open queries, we focus on which queries are being raised, why, and how we should correct processes to prevent them.
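The distinction between a "one-off" and a systematic problem can be sketched in a few lines of code. This is an illustrative toy, not MANA RBM's actual analytics: the event names, the tuple layout, and the threshold of three occurrences are all assumptions chosen for the example.

```python
from collections import Counter

def flag_systematic(errors, threshold=3):
    """errors: iterable of (site_id, error_type) pairs.

    Returns a dict mapping (site_id, error_type) -> count for every
    combination seen at least `threshold` times, i.e. candidates for
    a systematic process problem rather than a one-off mistake.
    """
    counts = Counter(errors)
    return {key: n for key, n in counts.items() if n >= threshold}

# Hypothetical event log: site_01 keeps using the wrong ICF version,
# while site_02 has a single IP temperature excursion.
events = [
    ("site_01", "ICF_version"), ("site_01", "ICF_version"),
    ("site_01", "ICF_version"), ("site_02", "IP_temperature"),
]
print(flag_systematic(events))  # -> {('site_01', 'ICF_version'): 3}
```

The point of the sketch is the shape of the question: oversight asks "which error, at which site, how often?" rather than "how many queries are open?"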

An easy analogy: we are like the Capital One alerts you receive immediately when something occurs that could affect the use of your credit card. Traditional monitoring is more like your bank statement: it tells you what you have, but it does not identify the problems that are lurking. The MANA Method delivers specific warnings about problems with your trial to your email in near real time.

Nearly a decade of innovation and evolution:

The following is a chronicle of our journey. It is a story of innovative Sponsors that needed innovative solutions, and of how tools, processes, and, most importantly, a way of thinking about oversight have evolved.

Early EDC trials (and even many trials today) focus on Source Data Verification (SDV), a manual, direct comparison between a paper "source document" and the data entered later into the EDC. The problem with this method is that the important errors are missed because of the way the data are evaluated.

As anyone involved in the COVID-19 pandemic knows, how you look at the data determines what it will tell you. Your oversight methods determine what you can find and how fast. When you design your oversight specifically to find the errors that can affect trial integrity, and to determine whether they are systematic, you have much more power to find and correct errors quickly.

Delay in finding errors was also due to delayed data entry, which is why the FDA released its eSource Guidance in 2009.

In 2012, our first technology step was to move from EDC to an eSource system, which allowed us to see data in near real time. There was no more SDV because there were no source worksheets; in fact, this taught us the need for a new, more effective monitoring approach. We could now monitor in real time, and our oversight focused on how to analyze the ETM to identify errors faster without visits to the sites. Dr. Mary Flack and Dr. Jakub Simon at Nanobio helped shape our thoughts about "errors that matter" (ETM) and site training. We designed study-specific reports and data visualizations that allowed us to quickly find the ETM, if they occurred. We could respond to the site within days, delivering quick, focused retraining as needed. By using these innovative approaches, we cut the deviation rate they had seen in traditionally run trials by 50%. We also added an eISF and eTMF so that informed consents could be monitored remotely, using certified copies, within 5 days of a subject visit. With this remote ICF review, we found an error on an IRB consent in 3 days rather than months later, saving the sites hours of rework. We published all of this work at the DIA in 2014.

But there was a lot more we needed to know that had traditionally resided in spreadsheets in a project manager's office, or in inaccessible documents at the research site. We work from the principle that if something is important enough to analyze, the data must be in an analyzable format. This led to a big breakthrough: our Site Tracker Analyzing Risk (STAR™). STAR™ allowed us to collect the critical data we needed to monitor: site-level data, not subject-level data. We designed a database that can be built in traditional EDC systems, so it can be customized to each study, a critical requirement. STAR™ allows us to monitor delegation logs, debarment checks, IP receipt, and a myriad of other aspects of trials that are usually unavailable for remote review. STAR™ is a critical component of remote trial management.
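To make the site-level vs. subject-level distinction concrete, here is a minimal sketch of the kind of record such a tracker holds. This is not the STAR™ schema, which is built inside an EDC system and customized per study; every field name below is an illustrative assumption.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class SiteRecord:
    """One row of hypothetical site-level (not subject-level) tracking data."""
    site_id: str
    delegation_log_current: bool = False        # delegation log reviewed and up to date
    debarment_check_date: Optional[date] = None # last debarment check, if any
    ip_received_date: Optional[date] = None     # Investigational Product receipt

    def open_items(self):
        """Return the site-level checks still outstanding for remote review."""
        items = []
        if not self.delegation_log_current:
            items.append("delegation log")
        if self.debarment_check_date is None:
            items.append("debarment check")
        if self.ip_received_date is None:
            items.append("IP receipt")
        return items

site = SiteRecord(site_id="site_07", delegation_log_current=True)
print(site.open_items())  # -> ['debarment check', 'IP receipt']
```

Because each check is a structured field rather than a note in a spreadsheet, outstanding items can be queried and reported across all sites remotely, which is the point of putting site-level data in an analyzable format.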

But can you take this and "run" with it? MANA RBM ran that experiment too. We worked with Dr. Richard Erwin, Michelle Pallas, and Dr. Lisa Danzig to set up this system so they could run a trial with over 900 subjects using a very small team, monitored almost completely remotely. The result: a resounding yes. The monitors even called our approach "fun monitoring" because it provided a superior way of identifying errors. MANA RBM was a finalist for the Clinical IT Best Practices Award in 2016 for this effort.

Finally, with our colleagues at PaxVax, we internally financed and conducted a pilot study to determine whether MANA RBM's risk-based monitoring approach was as effective as SDV in finding Errors that Matter and systematic errors. The answer: another resounding yes!

We identified issues that had been missed with traditional SDV. We published the paper in Applied Clinical Trials (Manasco et al., "Risk-Based Monitoring Versus Source Data Verification," Applied Clinical Trials, Nov. 2018, pp. 8-13). This is the only paper we have found that performs a direct comparison with SDV, showing directly what others have identified only indirectly.

But tools are only part of the challenge. We evaluated the competencies needed for a monitor using RBM and published them. We evaluated barriers to adoption of risk-based monitoring and published those as well. We developed a new tool for faster, more comprehensive subject oversight and published the results of our assessment. These represent only a few of the papers we have published in Applied Clinical Trials about adopting remote trial management and oversight.