We’re all familiar with the clinical research aphorism, “If it wasn’t documented, it didn’t happen.” As clinical studies transition from hybrid to mostly electronic environments in the wake of the pandemic-driven shift to remote work, what constitutes valid documentation?
To frame our discussion, we’ll draw from our old standard, ALCOA+, focusing particularly on the requirements for documentation to be attributable and contemporaneous.
In the all-paper world, this was relatively simple. To document that something happened, you needed a paper record, typically a form that the user filled out to capture a certain activity. A wet-ink signature fulfilled requirements for attributability, because handwritten signatures were considered to be unique to their owners, and the date on the signature showed that it was contemporaneous with the record.
When 21 CFR Part 11-compliant computer systems were implemented, the audit trail fulfilled requirements for attributability and contemporaneity, because it tied each action to a user, date, and time. Nonetheless, to mimic the paper process, we added the extra capability of the electronic signature to attest to the validity of the record captured by the audit trail.
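As a hypothetical illustration (not any particular EDC system's implementation), an audit-trail entry minimally binds an authenticated user, a system-generated timestamp, and an action to the record it modifies:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEntry:
    """One immutable audit-trail record: who did what, to which record, and when."""
    user_id: str    # authenticated account, not free-text initials
    action: str     # e.g. "UPDATE", "SIGN"
    record_id: str
    old_value: str
    new_value: str
    # Timestamp comes from the system clock at creation, not from user input
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

entry = AuditEntry("jdoe", "UPDATE", "AE-1042", "mild", "moderate")
```

Because the class is frozen and the timestamp is assigned by the system, the resulting record is attributable (tied to a login) and contemporaneous (stamped at the moment of the action) by construction; the user names and record IDs here are invented for the sketch.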
Paper is cumbersome and Part 11-compliant computer systems are expensive, so teams try to work around these limitations by “automating” paper workflows using electronic tools. They avoid the requirement to validate these systems by citing the “risk-based approach,” but the problem isn’t insufficient validation. The problem is that these records are neither attributable to their executors or to the activities they reference, nor provably contemporaneous with the activities they document. Consider these use cases:
Case #1: A study team captures approvals via email on key documents so they’re not slowed down by circulating paper. The emails are attributable to the approvers, but the approval is not attributable to the version approved unless the final version is attached to the email retained in its native format. Because the approvers are typically reviewing the penultimate version – just prior to final cleanup – the final version is rarely attached. Without a signature on the final version, it’s impossible to prove that the approver laid eyes on the final changes.
Case #2: A study team maintains a log of testing and validation activities for edit checks using an Excel tracker to demonstrate that they are testing each edit check according to their SOPs. The data management team divides up the edit checks to execute them, and then different users enter their initials, the pass/fail results, and any follow-up for each check into the spreadsheet, which is stored on a shared drive.
Again, the initials entered by different users are neither attributable (there’s nothing to prevent you from entering my initials, or vice versa) nor demonstrably contemporaneous (there is no audit trail that time-stamps each entry).
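To sketch the contrast (this is an assumption-laden example, not a validated system), an append-only log can stamp each edit-check result with the authenticated user and system time, rather than relying on typed initials in a shared file:

```python
from datetime import datetime, timezone

def log_check_result(log: list, user_id: str, check_id: str, result: str) -> dict:
    """Append one edit-check result with the authenticated user and a UTC timestamp.

    Entries are only ever appended, never edited in place, so the sequence
    itself serves as a rudimentary audit trail.
    """
    entry = {
        "user": user_id,                                # from login, not free text
        "check": check_id,
        "result": result,                               # e.g. "PASS" / "FAIL"
        "utc": datetime.now(timezone.utc).isoformat(),  # system clock, not user-entered
    }
    log.append(entry)
    return entry

audit_log = []
log_check_result(audit_log, "asmith", "EC-017", "PASS")
```

The function name, check IDs, and users are hypothetical; the point is only that attribution and contemporaneity come from the system capturing them, not from whatever a user chooses to type into a cell.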
Case #3: A large group of CRAs is trained during a remote meeting. The Clinical Project Manager captures the list of attendees in the meeting minutes as documentation of training. As with the example above, this method lacks attribution and includes no evidence that it was captured contemporaneously with the event.
These use cases raise the following question: How attributable and contemporaneous do clinical trial records need to be? For example, we don’t require every meeting attendee to sign and date each copy of meeting minutes as proof of attendance, but if the meeting includes training we usually require trainees to sign. Most sponsors require approval signatures for clinical monitoring plans, but not for pharmacy manuals or laboratory manuals. Master ICFs are usually formally approved, but advertisements aren’t.
It’s interesting to consider where we draw the line and how we can derive business rules from our experience to capture compliant documentation without drowning ourselves in signatures.