WHITE PAPER
From Here to Risk-Based Monitoring
A framework and checklists for a successful transition
Contents
1. Understanding Risk-Based Monitoring Solutions in Full
   Limitations of Previous Comparisons
   The Importance of Real-Time, Forward-Looking Management
   Suggested Framework for Comparison
2. Monitoring Each Study on its Own Terms
   The Starting Point: a Risk-Based Monitoring Plan
   The Importance of Midstudy Adjustments
3. Looking Backward: Error Detection and Correction
   Introduction
   Audits
   Central Statistical Monitoring
   Focused Continuous Central Monitoring
   The Old Standby: Source Data Verification
4. Monitoring in the Moment: Real-Time Quality Management
   Quality Metrics
   Site Performance Metrics
   Key Risk Indicators
   Fixed Performance Indices
   The Speed of Monitoring Adjustments
5. Ensuring Future Success: Error Prediction and Prevention
   Beyond Reactive Monitoring
   Error Prediction Based on On-Screen Data Checks
   Dynamic Performance Indices
   Indirect Leading Indicators
6. Planning the Right Transition for You
   Implementing a Solution In-House
   Selecting an Outsourced Solution
7. Summing Up
1. Understanding Risk-Based Monitoring Solutions in Full
Limitations of Previous Comparisons
Data quality, patient safety and optimal resource allocation are the primary goals of risk-based monitoring. In the two
years since publication of the relevant EMA reflection paper and FDA guidance, methodologies, products and services
for risk-based monitoring have proliferated and diversified. The biopharmaceutical industry can choose from a variety
of general approaches and specific practices to achieve the desired goals. This article provides a framework to assist
biopharma and medical device companies in selecting and implementing a risk-based monitoring approach or selecting
a service provider based on offerings in the marketplace. A comparison of specific risk-based monitoring products and
services is beyond the scope of this article.
Most comparisons of risk-based monitoring approaches have focused overwhelmingly on source-data verification (SDV)
– what percentage of different types of information on which subjects and site activities must be verified or reviewed
and how. The approach to SDV is central to any monitoring approach, risk-based or otherwise. However, as befits a
process that is largely unchanged from the era of paper-based clinical trials, SDV is mostly about detection and correction
of errors after the fact. Focusing the discussion of risk-based monitoring on SDV tends to understate other important
aspects of risk-based monitoring such as proactive quality management and error prevention.
The Importance of Real-Time, Forward-Looking Management
In the author’s view, a comparison of risk-based monitoring approaches should devote substantial attention to
prospective features that enable proactive management to ensure that each study meets goals for data quality, patient
safety and optimal resource allocation. Error detection and correction remain essential but the most effective risk-based
monitoring approaches shift the primary focus from the past to the present and future. Accurate understanding of
current study status is vital. Therefore, a risk-based monitoring approach should track data quality and site performance
on a basis as close to real-time as possible. In addition to proactive management based on current information, risk-based monitoring should include anticipatory management based on predictive modeling.
Some may think extending the comparison of risk-based monitoring approaches in this manner ventures beyond
monitoring and into trial management. That is true. However, technology makes the extension unavoidable. The
distinction between monitoring and trial management was clear in an age of paper trials, but the advent of electronic
data capture (EDC) and Electronic Clinical Trial Management Systems (eCTMS) has unified once separate activities. In
the electronic age, capabilities for tracking and predicting data quality and site performance are as essential as SDV to
achieving the goals of risk-based monitoring. Therefore, this paper presumes that a comparison of risk-based monitoring
approaches should encompass upstream and forward-looking activities that affect data quality.
Suggested Framework for Comparison
This article compares risk-based monitoring approaches and practices in the following areas:
• Tailoring the Approach for Each Study
• Looking Backward: Correcting and Detecting Past Errors
• Monitoring in the Moment: Real-Time Quality Management
• Ensuring Future Success: Error Prediction and Prevention
• Making the Transition: Issues in Adoption and Implementation
This paper takes the view that SDV, while extremely important, is one form of error correction and detection.
2. Monitoring Each Study on its Own Terms
FDA guidance stresses the importance of performing a risk assessment to identify issues specific to the trial prior
to authoring a risk-based monitoring plan. Such a plan foresees the need to adjust monitoring activities based on
circumstances observed during the trial. However, problems may arise in areas where risks initially seemed low. For this
reason, individualization is important during the trial as well as in the planning stage.
The Starting Point: a Risk-Based Monitoring Plan
Risk-based monitoring approaches typically set allowable error rates for tiers of subjects, track the number of errors and
determine whether monitoring adjustments are required to keep the incidence of errors in check. For example, tiered
monitoring approaches often rely on tables from QA practice that specify the number of acceptable errors for critical
data on subjects: 1-10, 11-100 and 101 or more. This is a reasonable starting point for a monitoring plan.
However, a risk-based monitoring plan should identify the specific issues and risks involved in a study and especially
the potential effects of different errors on the final statistical analysis and regulatory submissions. The study team must
assess the likely effect of each type of error on the final analysis as well as the worst-case effect. Errors in a certain
field related to a primary endpoint may be of paramount importance. The magnitude of errors or variability of data in
fields related to a primary endpoint may affect the reliability of results just as the incidence of errors does. The need to
understand the likely and worst-case effects of different factors on the final analysis explains why biostatisticians play
such an important role in risk-based monitoring.
At a minimum, a risk-based monitoring plan should define the following:
• Critical data
• Key Risk/Quality/Performance Indicators (KRIs/KQIs/KPIs)
• Potential error types and associated remedial actions and action triggers
• Allowable error rates
• Initial SDV approach, including an algorithm for selecting subjects and data to SDV and % SDV for subject groups, data types, ICFs, study procedures, etc.
• The basis for adjusting the SDV approach during the study
• The basis for scheduling monitoring visits and adjusting the visit schedule

Section 4 provides detailed information on a variety of metrics to consider when authoring a risk-based monitoring plan. For simplicity, this document will use "Key Risk Indicators" to denote measures that industry practice sometimes calls Key Quality Indicators or Key Performance Indicators.
The Importance of Midstudy Adjustments
Optimal monitoring requires adjusting to actual circumstances
observed during a study. If the available CTMS includes an integral
adjustment mechanism, the initial monitoring plan may define how
such a mechanism will apply to the current study. Other types of
substantial changes in the monitoring approach should be reflected
in amendments to the monitoring plan.
Checklist 1: Individualization for the Specific Study
At startup, including monitoring plan
• Risk assessment, including risk to analysis
• Choice of metrics
  – Derived from past trials?
  – Specific to study?
• Flag critical fields?
• Set Acceptable Quality Levels/error rates for each:
  – Primary endpoint?
  – Secondary endpoint?
  – Data field?
Adjustments during trial
• Monitoring basics
  – Frequency
  – Intensity
  – Monitoring method
    » Central (remote)
    » Onsite
    » Mixed
• SDV targeting (see separate table)
• Metrics
  – Change Key Risk Indicators?
  – Change index components and weighting?
  – Correlate indices and components with actual quality/performance?
When a CTMS supports use of Key Risk Indicators, most monitoring approaches utilize the initially identified set
of KRIs throughout a study. However, an alternative approach offers greater flexibility, allowing adjustments to
KRIs during the trial based on continuous assessment of the predictive value of each indicator for actual site
performance and data quality. Such adjustments may change weights for each component metric or extend
to replacing one metric initially designated as a KRI with another that has proven a better predictor of site
performance and data quality. To support such flexibility in KRIs, a risk-based monitoring system may incorporate
a model that assesses the predictive power of various metrics, enabling changes in indices and dashboards that
continually focus the study team on the activities that have proved most important to the success of the trial.
3. Looking Backward: Error Detection and Correction
Introduction
The management utility of methods of error detection and correction depends on time of utilization. The most definitive
and accurate way to detect errors is by examining and analyzing the entire study database at study close. However, the
only management adjustment possible at that stage is to avoid a regulatory submission based on low-quality data. At the
other extreme, there is too little data in a study's earliest days to support many approaches to error detection
and correction. Thus, time of utilization is central to any assessment of methods for error detection and correction.
Audits
While it is important to detect errors before a regulatory submission, error detection during an audit shortly before
database lock may come too late to save a study. Such an audit may produce information on site performance that is as
definitive as a medical examiner’s determination of the cause of death. Audits are included here because a risk-based
monitoring plan likely includes triggers for audits. Audits are clearly all about error detection rather than correction.
Central Statistical Monitoring
As the FDA risk-based monitoring guidance suggests, central statistical monitoring is a more effective way to identify
some types of errors than onsite SDV and is a wise precaution as reliance on remote monitoring increases. The question
about central statistical monitoring is not whether it can identify errors but when. Central statistical monitoring
packages have the disadvantage of requiring substantial amounts of trial data. Such packages typically operate in batch
mode, perhaps running only a single time after collecting 80% of trial data or at the very end. Such packages have the
advantage of being able to analyze a high percentage of data and provide a valuable safety net to detect anomalies in
unmonitored data. This may allow correcting problems in planned regulatory submissions or prevent a submission that
is doomed from the outset because of quality issues. Like late audits, late central statistical monitoring may provide only
an authoritative postmortem for the current study. If results are favorable, central statistical monitoring can increase
confidence in earlier monitoring of the trial by other means and in a planned regulatory submission.
In principle, central statistical monitoring could contribute to trial management through more frequent runs. However,
the value of the analysis depends on the volume of data available, likely preventing the use of central statistical
monitoring as a tool for managing a study to meet quality goals.
Focused Continuous Central Monitoring
The central monitoring that matters most to the success of risk-based monitoring identifies problems continuously from
the outset and provides a basis for rapid reaction to emerging issues. This type of central monitoring is continuous in the
sense that members of the study team perform a variety of checks every day, not that all types of checks are performed
continuously. Focused continuous central monitoring can include rules-based analytic checks, cross-CRF consistency
comparisons, checks for outliers, checks of dates vs. treatment windows and many other types of checks.
Some central monitoring processes are useful from the first patient visit, such as those that detect inconsistencies
between CRFs on the same subject or between subjects for the same visit. A single date for a patient visit that occurs
outside a treatment window may reflect a data entry error or a serious issue in site performance. Flagging an out-of-window visit allows immediate investigation to identify and correct the problem. In some instances, it is necessary to
establish a baseline for central checks. However, it is often possible to establish a useful baseline early in a study based
on the first few subjects at each site.
The keys to focused continuous central monitoring are to:
• Identify as many potential problems as possible in advance
• Establish processes for detecting such problems
• Maintain relentless vigilance for signs of any developments or events that might compromise data quality.

Rules-based analytic monitoring encapsulates experience with previous trials. Each rule reflects a previously observed pattern associated with a specific type of error. For example, a rule may flag repeated values for vital signs for a single subject or repeated values for all subjects on a single date, both highly unlikely to result from actual measurements. The value of central rules-based analytic monitoring depends on:
• Completeness of the library of rules
• Applicability of the rules to the current study
• Frequency and timeliness of the rules-based analysis.

As with central statistical monitoring, the value of rules-based analytic monitoring as a tool for achieving quality goals is limited unless the analysis occurs early and often. Inclusion of rules-based monitoring in this section rests on the assumption that rules-based checks will happen frequently or continuously.

Checklist 2 summarizes key elements and considerations in central monitoring.

Checklist 2: Central Monitoring Approaches
Central statistical monitoring
• Pattern detection
• Deviation detection
• Minimum data requirements?
• Frequency?
Focused central monitoring
• Rules-based
  – Rules libraries available?
  – Frequency?
• Study-specific data checks
• Validations vs. protocol requirements
• Standard DM checks
• Cross-CRF consistency comparisons
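The repeated-values rule described above translates directly into a small check. This sketch is illustrative only; the threshold of three identical readings and the record layout are assumptions, not any vendor's rule library:

```python
from collections import Counter

def flag_repeated_vitals(readings, min_repeats=3):
    """Rules-based check: flag subjects whose vital-sign values repeat
    identically across visits, which is highly unlikely with real
    measurements and may indicate fabricated or copied data."""
    by_subject = {}
    for subject, visit, value in readings:
        by_subject.setdefault(subject, []).append(value)
    flags = []
    for subject, values in by_subject.items():
        value, count = Counter(values).most_common(1)[0]
        if count >= min_repeats:
            flags.append((subject, value, count))
    return flags

readings = [
    ("001", "V1", 120), ("001", "V2", 120), ("001", "V3", 120),
    ("002", "V1", 118), ("002", "V2", 124), ("002", "V3", 121),
]
print(flag_repeated_vitals(readings))  # flags subject 001
```

A production rule library would cover many such patterns (repeated values across subjects on one date, digit preference, and so on), each encoding experience from previous trials.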
The Old Standby: Source Data Verification
This section concerns the range of SDV approaches supported by a
risk-based monitoring system or service. As noted in the introduction,
existing comparisons of risk-based monitoring approaches typically
focus on SDV approach. Key dimensions of SDV are the selection
algorithm and the granularity of selection. In some instances,
features such as specific selection algorithms and field tagging may
be native to a trial-management or EDC system. However, it is also
possible for biostatisticians to define and implement SDV algorithms
in collaboration with the study management team. If implementing
a risk-based monitoring product in-house, native support for such
features may be particularly important in the absence of internal staff
with the capabilities and time to implement desired algorithms and
field tagging.
Common approaches to SDV include random selection of data
on tiers of subjects and declining SDV after early monitoring
demonstrates acceptable error rates. A purely random approach
ignores the central principle of identifying areas that pose the
greatest risk to data quality and focusing monitoring
efforts accordingly.
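To make the tiered and declining approaches concrete, the following sketch combines them: high-risk subjects always receive SDV, while coverage of lower tiers declines once early monitoring shows an acceptable error rate. The tiers, rates and decline factor are invented for illustration and do not represent any specific product's algorithm:

```python
import random

def select_subjects_for_sdv(subjects, tier_rates, observed_error_rate,
                            acceptable_error_rate, decline_factor=0.5, seed=42):
    """Pick subjects for SDV by risk tier, reducing coverage of lower tiers
    once early monitoring shows the error rate is within the acceptable level."""
    rng = random.Random(seed)  # fixed seed so selection is reproducible/auditable
    selected = []
    for subject_id, tier in subjects:
        rate = tier_rates[tier]
        if observed_error_rate <= acceptable_error_rate:
            rate *= decline_factor  # declining SDV after acceptable early results
        if tier == "high" or rng.random() < rate:
            selected.append(subject_id)  # high-risk tier always gets 100% SDV
    return selected

# Hypothetical study: every tenth subject is high-risk
subjects = [("S%03d" % i, "high" if i % 10 == 0 else "low") for i in range(1, 51)]
tier_rates = {"high": 1.0, "low": 0.3}
picked = select_subjects_for_sdv(subjects, tier_rates,
                                 observed_error_rate=0.01,
                                 acceptable_error_rate=0.02)
print(len(picked), "of", len(subjects), "subjects selected")
```

Unlike a purely random draw, a selection of this shape keeps monitoring effort concentrated where the risk to data quality is greatest.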
Checklist 3:
Representative SDV Parameters
Selection algorithms
• Triggered
• Targeted
• Random
• Declining
• Tiered
• Mixed
Granularity
• By site
• By subject
• By CRF
• By field
• By endpoint
4. Monitoring in the Moment: Real-Time Quality Management
Immediate feedback is central to improving any process. Because feedback from SDV is far from immediate, effective
real-time quality management must utilize a variety of study-specific Key Risk Indicators as a basis for controlling factors
that play an important role in ensuring data quality, patient safety and efficient resource allocation.
Quality Metrics
The ability to set target Acceptable Quality Levels (AQL) for data related to an endpoint or for a specific field and then
to track actual quality continuously during the trial provides a foundation for achieving quality goals. This approach
incorporates tracking of measures such as variability of data for individual fields or for the set of fields that define a
primary or secondary endpoint. Metrics of this type focus monitoring attention in areas of greatest importance to the
analysis and regulatory submissions.
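Tracking observed error rates against per-field AQLs reduces to a simple comparison, sketched below. The field names and thresholds are hypothetical, chosen only to show a tighter AQL on an endpoint-related field:

```python
def check_against_aql(error_counts, field_counts, aql):
    """Compare the running error rate for each field against its
    Acceptable Quality Level and return the fields that breach it."""
    breaches = {}
    for field, n_entries in field_counts.items():
        if n_entries == 0:
            continue  # no data yet, nothing meaningful to compare
        rate = error_counts.get(field, 0) / n_entries
        if rate > aql[field]:
            breaches[field] = round(rate, 3)
    return breaches

# Hypothetical running totals: errors found vs. fields entered so far
error_counts = {"sbp_primary": 9, "visit_date": 2}
field_counts = {"sbp_primary": 200, "visit_date": 400}
aql = {"sbp_primary": 0.02, "visit_date": 0.05}  # tighter AQL on the endpoint field
print(check_against_aql(error_counts, field_counts, aql))
```

Run continuously against streaming data, a check like this surfaces AQL breaches as they emerge rather than at the next periodic report.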
Streaming information is ideal for effective management to achieve quality goals. If management relies on periodic reports, errors can escape detection and proliferate between reports. Checklist 4 summarizes metrics useful in real-time quality management as well as some important considerations involved in selecting such metrics.
Checklist 4: Metrics for Real-Time Quality Management
Real-time streaming or periodic?
• If streaming, currency
• If periodic, frequency
Data quality levels vs AQL by:
• Field
• Critical field
• Endpoint
• Subject tier
• Site
• Monitor
• Region
• Queries/100 fields
• Primary endpoint queries
• Critical field queries

Site Performance Metrics
Experience shows that sites that are slow to enroll patients, enter data from patient visits or resolve queries are more likely to produce low-quality data. Site performance metrics like those provided in Checklist 5 enable the study team to identify potential problem sites and intervene early to prevent a compromise of data quality.

Key Risk Indicators
Key Risk Indicators (KRIs) (also called Key Performance Indicators or Key Quality Indicators) are measures of error rates and data quality that are designated as important for management of the current study. The value of KRIs is availability before accumulation of sufficient data for meaningful measures of parameters such as error rates and variability. Some KRIs, such as query rates and time to query resolution, apply universally. Study planners may identify additional KRIs based on experience with previous trials in the same therapeutic area or, preferably, based on the initial risk assessment for the current study.
Fixed Performance Indices
Performance indices combine selected Key Risk Indicators to reflect
site performance on activities related to data quality. Some systems
utilize the same performance indices for all studies. Others define
performance indices during startup, often based on historical
information, and leave the indices unchanged throughout a study,
with both the component measures and weights for each fixed. Fixed
performance indices are a valuable basis for study management.
However, dynamic indices based on the actual predictive value
of component measures are far more valuable. Section 5 covers
dynamic performance indices.
The Speed of Monitoring Adjustments
Periodic Adjustments
Management based on periodic reports is an artifact of the age of
paper-based studies. Even today, many management approaches rely
on monthly or quarterly reports. With such long reporting cycles,
errors may escape detection for long periods. Worse yet, similar
errors may proliferate before the study team realizes that there is
a problem. Another common problem with periodic reports is that
they often provide data tables that are a starting point for analysis
rather than a basis for action.
Checklist 5: Site-Performance Metrics for Real-Time Management
Real-time streaming or periodic?
• If streaming, currency
• If periodic, frequency
Observational or actionable?
Common metrics:
• Time to query resolution
• Time from patient visit to CRF entry
• Protocol deviations
• Protocol deviation under/over reporting
• AEs under/over reporting (by comparison with study median)
• SAE queries
• AE queries
• Time to SAE reporting
• Enrollment rate
• Screenfail rate
• Discontinue rate

Continuous Adjustments
The ideal process for error detection and correction is continuous rather than periodic, utilizing streaming information for early detection of problems and precise planning to enable immediate response. The ability to make continuous adjustments depends on:
• Specification of clear decision criteria (triggers) in the monitoring plan
• Specification of specific remedial actions for foreseeable types of quality issues
• Availability of actionable information (as opposed to raw data or tabular reports) relating to triggers
• Readiness to execute predefined responses
• A feedback loop to track the effectiveness of responses and make further adjustments if necessary
• Vigilance for signs of unforeseen issues
• For some types of data, accumulation of sufficient information for meaningful interpretation.
5. Ensuring Future Success: Error Prediction and Prevention
Beyond Reactive Monitoring
While essential, the query process as an industry institution promotes a mentality that accepts errors as inevitable
and focuses on piecemeal corrections. While a continuous process of error detection and correction improves on
customary reliance on periodic reports, a continuous process for error prediction and prevention enables a leap beyond
a reactive monitoring model, shifting the focus from error correction to prevention. Such a process enables the study
team to investigate suspected issues immediately and, upon confirmation of a problem, to intervene before an issue
can compromise study data. Error prediction heightens and focuses vigilance, enabling the study team to correct errors
at the source and prevent recurrences, keeping data as clean and accurate as possible throughout a trial. This not only
minimizes error and variability, but also increases efficiency by reducing the need for corrective actions. However,
emphasis on error prevention requires an effective mechanism for error prediction.
Error Prediction Based on On-Screen Data Checks
The frequency of data-entry errors rejected by on-screen data checks may provide an early indication of site
performance problems. By definition, errors rejected at this stage do not reach the study database. However, information
on range-check failures is available immediately and thus has management value if it turns out that such failures track
with real site performance issues. On the other hand, inaccurate keyboarding by designated site personnel may NOT
reflect an investigator’s ability to ensure proper performance of study procedures and collection of accurate data on
eligible subjects. For this reason, results of on-screen data checks may not be a reliable indicator of actual or future
site performance.
Dynamic Performance Indices
Ideally, a Site Performance Index is composed of elements that are in fact predictive of site performance. While
this is the goal with fixed performance indices, the selected component measures and weights do not always prove
predictive. A model that assesses the predictive value of a range of candidate metrics enables adjustments to both
the components and weights of a performance index. This tunes the performance index to the realities of the current
study and provides a basis for reports and dashboards that focus the study team on the factors that are proving most
important to data quality and study success. In effect, dynamic performance indices provide continuous updates to the
initial risk assessment, thus focusing monitoring activities on actual rather than expected risks and providing trending
information and a basis for predicting future sources of error. Typical risk-based monitoring is only as good as the initial risk assessment. Risk-based monitoring based on dynamic performance indices ensures that risk-based monitoring is functioning as intended.

Checklist 6: Metrics for Error Prediction and Prevention
Rules of thumb
• Errors at on-screen data checks
Dynamic performance indices
• Consist of most predictive elements proportionally weighted
• Components adjusted based on actual predictivity
• Weights adjusted based on actual predictivity
• Provide trending and predictive information
Indirect measures
• Available without visiting site?
• Known independently of site reporting?
• Predictive of direct measures
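The component-and-weight adjustment listed in Checklist 6 can be sketched, purely for illustration, as a correlation-based re-weighting. The metric names and per-site values below are invented; a production system would use a proper predictive model rather than raw correlation:

```python
import statistics

def reweight_index(component_history, error_history):
    """Re-weight performance-index components in proportion to how well each
    has tracked the observed error rate across sites -- a simple stand-in
    for a model of each component's actual predictive value."""
    def corr(xs, ys):
        mx, my = statistics.mean(xs), statistics.mean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sum((x - mx) ** 2 for x in xs) ** 0.5
        sy = sum((y - my) ** 2 for y in ys) ** 0.5
        return cov / (sx * sy) if sx and sy else 0.0

    # Keep only components that positively predict errors, then normalize
    scores = {name: max(corr(values, error_history), 0.0)
              for name, values in component_history.items()}
    total = sum(scores.values()) or 1.0
    return {name: round(s / total, 3) for name, s in scores.items()}

# Hypothetical per-site component metrics and observed error rates (4 sites)
component_history = {
    "query_rate":      [0.9, 0.7, 0.4, 0.2],
    "entry_lag_days":  [0.8, 0.9, 0.3, 0.1],
    "enrollment_rate": [0.5, 0.1, 0.9, 0.4],  # turns out not predictive here
}
error_history = [0.8, 0.7, 0.2, 0.1]
print(reweight_index(component_history, error_history))
```

Re-running such a calculation as data accumulates is what lets a dynamic index drop a component that has proven non-predictive and shift weight to the metrics that actually track site quality.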
Indirect Leading Indicators
Another approach to obtaining indications of site performance as early as possible is identification of indirect or surrogate measures of site performance. When such indirect indicators are available continuously and without the need for monitoring visits, they become valuable management tools for achieving quality goals.
6. Planning the Right Transition for You
Companies face a choice between in-house implementation and outsourcing. Each involves challenges. In-house
implementation must contend with the need to evolve existing systems or adopt new ones. While outsourcing clinical
trials allows a choice among many CROs, relatively few CROs offer robust risk-based monitoring during the current
transitional period.
Implementing a Solution In-House
In-house implementation of risk-based monitoring presents substantial challenges whether adopting and implementing
new systems or evolving in-house systems developed before the current emphasis on risk-based monitoring. Immediate
use of risk-based monitoring on in-house systems may require accepting substantial functionality limitations in systems
not designed for the purpose. A fuller implementation will likely take substantial time and investment. Implementing
a homegrown system provides the opportunity to focus on individual company needs and preferences but is likely the
most costly and time-consuming approach.
When choosing commercial software integrated in an EDC system or eCTMS, features in the current release of the
software determine the possibilities in the short term. If the current implementation of the chosen package lacks desired
features, the customer must persuade the developer to include the desired functionality in a future release. That release
may become available months or years after recognition of the need for additional features. Furthermore, a commercial
software vendor will likely prioritize revisions based on the size and importance of customers and the number of
customers requesting similar functionality.
Furthermore, the transition to any new technology always brings the risk of picking the wrong product or vendor. It is
common during technology transitions to invest in a product that proves less capable than expected or a vendor that
makes design choices that prove suboptimal as the technology and market evolve. This argues for a thorough, deliberate
selection process and a strategy of maintaining as much flexibility as possible.
Selecting an Outsourced Solution
Checklist 7: Issues in Adoption and Implementation
In-House Implementation
• Identify desired feature set
• Compare offerings
• Select commercial package
• Workarounds for missing features?
• Timeline
• Installation
  – Process development
  – Pilot project
  – Revisions based on lessons learned
  – Deployment
  – Staff training
  – Cultural transformation from 100% SDV
  – Mainstream use
Outsourced
• Identify CROs with:
  – Full understanding of risk-based approaches
  – Appropriate technology
  – Track record
  – Trained staff
  – Completed cultural transformation
  – Ability to individualize to sponsor and study
• Send RFP
As with most complex software products, the process for
obtaining risk-based monitoring through outsourcing is
much easier than in-house adoption and implementation.
It is necessary to develop an understanding of the
potential possibilities, pitfalls and benefits, to author an
appropriate RFP and to identify CROs with the capability to
propose and deliver solutions. Implementation is someone
else’s problem.
One of the advantages of outsourcing during a technology
transition is the opportunity to learn and the flexibility to
change vendors based on initial experience. If the initial
experience is disappointing, the next RFP can reflect
lessons learned and target additional or different
service providers.
Depending on the match between CRO capabilities and
identified monitoring needs, outsourcing may enable
immediate adoption of fully functional risk-based
monitoring. Outsourcing is likely the preferred path for
small pharma and biotechs. Indeed, such companies may
realize a competitive advantage during the transition to
risk-based monitoring. While larger pharma companies
and many CROs and CTMS companies struggle with the
challenges of in-house technology migration, companies
unencumbered by existing infrastructure can identify
nimble partners that promise the benefits of risk-based
monitoring. Then it is a matter of ensuring that the
selected partner delivers the promised value.