EMA’s Guideline on Computerized Systems and Electronic Data in Clinical Trials: Good Documentation Practice
The European Medicines Agency (EMA) recently adopted the final version of its Good Clinical Practice Inspectors Working Group (GCP IWG) guideline on the use of computerized systems in clinical trials. It does a very nice job of articulating, in plain language, the issues surrounding data integrity, validation, and security that are “industry standard” but not specifically addressed in 21 CFR Part 11, Good Clinical Practice, or FDA guidances. We’re going to cover some key points in this next series of blog posts. First up: Good Documentation Practice.
Although Good Documentation Practice is essential to quality, it is not currently required by any US regulation (including GMP!), and it was added to ICH GCP E6 revision 2 only in reference to source documentation maintained by sites:
4.9.0 The investigator/institution should maintain adequate and accurate source documents and trial records that include all pertinent observations on each of the site’s trial subjects. Source data should be attributable, legible, contemporaneous, original, accurate, and complete. Changes to source data should be traceable, should not obscure the original entry, and should be explained if necessary (e.g., via an audit trail).
Most GDP/ALCOA+ training emphasizes controls put in place for handwritten records (for example, no sticky notes or Wite-Out). EMA’s computerized system guideline explicitly ties the concept of data integrity for computerized data to “ALCOA++” principles of Good Documentation Practice:
“Attributable” includes the ability to tell which person or system generated the data, which means that the audit trail should clearly reflect the originator of each action, whether that originator is a person or a system. This requirement has implications for situations in which data are transferred from one system to another.
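To make that concrete, here’s a minimal sketch in Python of what an audit trail record might capture so that both a site user’s data entry and a system-to-system import remain attributable. The field names and identifiers are entirely hypothetical, not drawn from the guideline or any particular system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class AuditTrailEntry:
    """One audit trail record; field names are illustrative only."""
    record_id: str        # the data point being acted on
    action: str           # e.g., "create", "update", "import"
    originator: str       # user ID for a person, system ID for an automated process
    originator_type: str  # "user" or "system", so transfers stay attributable
    timestamp: datetime

# A site user entering a value and an automated system-to-system transfer
# both produce entries whose originator is clearly identified:
entries = [
    AuditTrailEntry("SUBJ-001/VS/SBP", "create", "j.smith", "user",
                    datetime.now(timezone.utc)),
    AuditTrailEntry("SUBJ-001/VS/SBP", "import", "central-lab-feed", "system",
                    datetime.now(timezone.utc)),
]
for e in entries:
    print(f"{e.timestamp.isoformat()} {e.action} {e.record_id} "
          f"by {e.originator} ({e.originator_type})")
```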
For computerized systems, the requirement for “legibility” includes being able to reverse any changes to data, such as compression, encryption, or coding. For example, if you zip up a file to store it, you need to be able to unzip it to read it.
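As a small illustration of that round-trip requirement, here’s a Python sketch showing a compressed record being restored to its original, readable form. gzip stands in for whatever compression, encryption, or coding a system actually applies; the record content is made up.

```python
import gzip

original = b"Subject 001, Visit 2, SBP = 120 mmHg"

# Compress the record for storage (a change of format, like zipping a file)...
compressed = gzip.compress(original)

# ...and confirm the change is fully reversible, so the record stays legible.
restored = gzip.decompress(compressed)
assert restored == original
print("Round-trip successful:", restored.decode())
```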
Data that are captured in a dynamic state are “original” only if they are maintained in that state or replaced with a certified copy. For example, eCRF data are “original” when maintained in the EDC application, although a certified copy such as a .pdf representation may replace them. If you’re decommissioning your EDC or IRT system, you need to ensure that the data exports you receive have all the data and metadata required to qualify as a certified copy.
Data collected by a computer system should be “at least as accurate as those recorded on paper,” per the guideline. This means that any coding “should be controlled,” and data transfers should be validated. Metadata “could…contain information to confirm…accuracy” of the data. This bit is difficult to parse – does it refer to timestamps on audit trails, which help confirm data accuracy by placing it in context? Or does it suggest that metadata could include supplementary data to confirm accuracy, such as supplying a missing version date? We’ll keep an eye on upcoming reflection papers and Q&A to discern the intention.
“Completeness” may include “preserving the original context.” Again, this goes back to the concept of originality; a copy of the data may not preserve all the features of the dynamic state, such as when a .pdf of a completed eCRF doesn’t show all the drop-down menu choices.
Checks for data “consistency” should be implemented throughout data capture, processing, and migration to detect and avoid “contradictions.” For example, if a dataset is transferred from one vendor to another, this requirement suggests that a QC review is needed to verify that the data received are an exact copy of the data sent.
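One simple way to support that kind of QC review is to compare checksums of the file as sent and as received. Here’s a minimal Python sketch; the file names and contents are hypothetical, and a matching checksum confirms bit-for-bit equality, not that the correct dataset was sent in the first place.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Return the SHA-256 checksum of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical files standing in for the dataset as sent and as received.
sent = Path("dataset_sent.csv")
received = Path("dataset_received.csv")
sent.write_bytes(b"subject,visit,sbp\n001,2,120\n")
received.write_bytes(b"subject,visit,sbp\n001,2,120\n")

if sha256_of(sent) == sha256_of(received):
    print("Checksums match: received file is an exact copy of the file sent.")
else:
    print("Checksums differ: investigate before accepting the transfer.")
```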
“Enduring” and “available” mean that data should be stored in such a way that they are not corrupted, and that the team can find them when needed. That highlights the importance of an archived data inventory: there’s no point in storing data securely if no one knows where it is.
Changes to data should be “traceable” throughout their lifecycle. Our systems frequently do a great job of tracing data changes through the audit trail, but when we migrate or transfer data, we need to put manual processes in place to maintain that traceability.