In previous posts in this series, we looked at why and how sponsors and CROs collect protocol deviations. In this post, we will look at how we classify and categorize them.
Why classify and categorize? This question calls back to why we collect protocol deviations at all. Classification, or assigning a severity rating to each deviation, serves all three purposes of collecting protocol deviations: compliance, reporting, and analysis. Categorization, or bucketing protocol deviations into categories to improve assessment, supports compliance evaluation and, to some extent, reporting to IRBs. For our purposes, we’ll call these activities C&C, although this is not a standard industry acronym.
In CRA-centered collection processes, the CRA may enter the initial C&C when entering the deviation. In most cases, a broader team including Clinical Operations, Medical, Data Management, and Biostatistics works together to review those C&Cs, or to perform the initial C&C. To facilitate review, most teams export a list of PDs into a spreadsheet on a regular basis and assign or revise the C&Cs during a meeting. A note-taker captures the assignments and revisions on the spreadsheet. These data may then be entered back into the CTMS or EDC system, or, if the system has not been set up to capture them, they may be maintained on the spreadsheet, which is versioned and stored in a shared repository.
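To reduce the risk of one tracker snapshot silently overwriting another, each export can be saved under a unique, timestamped filename. A minimal sketch in Python; the filename pattern and the `study_id` parameter are illustrative assumptions, not an industry convention:

```python
from datetime import datetime, timezone
from pathlib import Path


def versioned_tracker_path(base_dir: Path, study_id: str) -> Path:
    """Build a timestamped filename for a new PD tracker export,
    so each snapshot is preserved rather than overwritten."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    return base_dir / f"{study_id}_pd_tracker_{stamp}.xlsx"
```

Timestamped names provide manual traceability between meetings, but, as discussed below, they do not by themselves prove that a saved snapshot was never altered.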
These processes present a host of data integrity concerns. Auditors and inspectors frequently raise the following:
- Is the export of protocol deviations from the EDC or CTMS system validated? For EDC systems, this is usually not an issue, but not all CTMS systems have validated reporting capabilities.
- If the process requires certain roles (e.g., Medical and Biostats) to review protocol deviations, how do we know which PDs they reviewed, when the review took place, and the substance of the discussion? Some teams maintain meeting minutes and the reviewed output, but if documentation is not thorough, it could be difficult to prove that Biostatistician X reviewed Protocol Deviation Y. Many teams do not maintain any documentation of discussions apart from the updated tracker, which does not meet GDP/ALCOA+ requirements for attributability.
- How is the spreadsheet controlled from the time it is downloaded until the C&C data is entered/revised? Another way of asking the question is how would we know if data in the spreadsheet were changed? Some teams may save the spreadsheet under a different name every time a change is made, to provide manual traceability, but if the resulting spreadsheets are stored in Excel format, they can be overwritten without detection. Some combination of a manual change indicator, frequent versioning, and storage in a document control system that saves the version history is required.
- For processes that include C&C entries or updates in the EDC or CTMS, how are changes in these systems controlled? Most EDC systems have a robust audit trail, but not all CTMS systems do.
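One way to make otherwise undetectable spreadsheet changes visible is to record a cryptographic digest of the tracker file at each handoff: if the digest recorded at export no longer matches the file, the file was modified. A minimal sketch, assuming the tracker is a local file; this supplements, rather than replaces, storage in a document control system that retains version history:

```python
import hashlib
from pathlib import Path


def file_sha256(path: Path) -> str:
    """Return the SHA-256 digest of a file's bytes.

    Record this digest when the tracker is exported; compare it
    before data entry to confirm the file is unchanged."""
    return hashlib.sha256(path.read_bytes()).hexdigest()
```

Any change to the file's contents, however small, produces a different digest, so a mismatch flags the snapshot for investigation.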
In addition to putting in place the controls described above, we can incorporate QC steps into the process to catch errors as they occur. For example:
- During export we might institute a simple check to verify that the number of PDs exported to the spreadsheet equals the number in the EDC or CTMS system.
- Where a subset of PDs is exported for review meetings – for example, PDs that are “new” since the last meeting – a check might be performed regularly to verify that all PDs entered into the EDC or CTMS system have been evaluated during a meeting.
- PD meeting minutes could be QC'd for all required elements prior to finalization.
- The team could institute a regular “trace-back” of PDs, starting with the first tracker generated in a particular time period, to verify that all changes to the PD throughout its history are attributable, all PDs have been reviewed, and the final PD reflects the most recent data in the tracker.
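The count and completeness checks in the first two bullets can be automated by reconciling PD identifiers between the system and the tracker. A hypothetical sketch; the function name `reconcile_pds` and the PD ID format are illustrative assumptions, not a real EDC or CTMS API:

```python
def reconcile_pds(system_ids: set[str], tracker_ids: set[str]) -> dict:
    """Compare PD IDs in the EDC/CTMS extract against the review tracker.

    Returns IDs missing from either side, so discrepancies can be
    investigated before the review meeting is closed out."""
    return {
        "missing_from_tracker": sorted(system_ids - tracker_ids),
        "not_in_system": sorted(tracker_ids - system_ids),
        "counts_match": len(system_ids) == len(tracker_ids),
    }
```

Run against a full extract, this catches both PDs dropped during export and tracker rows that no longer correspond to a system record; run against the cumulative meeting history, it flags PDs that were never brought to review.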
In the next blog post, we’ll discuss tracking, trending, and responding to PDs, activities that suffer from many of the same data integrity issues as the classification and categorization process.