
EMA Guideline on Computerized Systems and Electronic Data in Clinical Trials: Security

Next in our series on EMA’s Guideline on Computerized Systems and Electronic Data in Clinical Trials, we look at Annex 4 of the guideline, which recommends security provisions for clinical systems.

What’s striking here is the amount of practical security advice. Unlike 21 CFR Part 11, which addresses security only in terms of access control, this guideline describes detailed physical, virtual, and infrastructure controls. These details may become obsolete as the environment changes, but in the meantime they provide a useful checklist that an auditor could follow when assessing a system used for clinical trials and the environment in which it is installed. Below are some key points:

Data centers should be structured and maintained to protect against natural disasters such as floods, fire, and pests, as well as physical incursions.  Accordingly, controls should include two-factor authentication for physical access; fire detection and suppression systems; uninterruptible power supply; emergency generators; and redundant connections to the internet.

The takeaway here is: don’t run your own data center. Unless you’re a Fortune 500 company, you can’t afford it and you don’t have the expertise. Implementing the very sound advice given above is a multi-million-dollar exercise. And that’s just the security part. The very expensive and complicated compute, storage, and networking gear also needs to be acquired, racked, maintained, and retired. It’s quite the task.

Instead, lease your compute resources from a colocation center (colo) or a cloud provider such as Amazon Web Services (AWS) or Google Cloud Platform (GCP), all of which implement the physical security practices called out in the EMA’s guidelines.

Data should be replicated to a secondary data center that is physically distant enough from the primary data center to avoid being affected by the same regional disaster.

Again, very good advice. But also again, simply not feasible for any but the most deep-pocketed firms. Instead, consider leveraging the redundancy already built into cloud services. Amazon and Google, for instance, not only maintain data centers in multiple regions around the world, but each region supports multiple “availability zones” (AZs): discrete data centers within a region that have redundant power, networking, and connectivity. AZs within a region are interconnected with high-bandwidth, low-latency networking over fully redundant, dedicated fiber, which makes data transfer fast and reliable.

Since the EMA advice is specifically about replicating data, we should also mention the object storage services that the cloud providers offer. These object stores (e.g. AWS S3 and Google Cloud Storage) are massively redundant storage systems that are most notable for their durability, where durability measures how unlikely it is that stored data will ever be lost. Both AWS and Google tout 99.999999999% durability of objects over a given year. Those eleven nines equate to storing a billion files for a hundred years and expecting to lose, on average, about one. Put another way, store your backups on S3 or GCS and sleep easy.
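If you want to sanity-check that claim, the arithmetic is straightforward. Here’s a quick back-of-the-envelope calculation in Python, assuming “eleven nines” means a one-in-100-billion chance of losing any given object in a given year:

```python
# Back-of-the-envelope check on "eleven nines" of durability, assuming the
# advertised figure means a 1e-11 annual probability of losing any one object.
objects = 1_000_000_000                       # a billion files
years = 100
annual_loss_probability = 1 - 0.99999999999   # eleven nines

expected_losses = objects * years * annual_loss_probability
print(f"Expected objects lost over {years} years: {expected_losses:.2f}")
# prints roughly 1.00 -- about one object out of a billion, over a century
```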

In order to provide a barrier between a trusted internal network and an untrusted external network … firewalls should be configured as “strict as practically feasible” to prevent traffic from unauthorized IP addresses, destinations, protocols, and ports.

Once more, some very sound advice from the EMA. Even if you have outsourced your data center operations to a cloud provider or colo, you may still have a private office network that needs protecting. You may even have extended your on-premises network to include your cloud presence. This private network may house some simple file servers or SharePoint instances, or it may encompass even more critical databases and servers. Since this office network is connected to the internet, it is essential that you configure your networking infrastructure to keep the bad guys out while allowing the good guys in.
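To make “strict as practically feasible” concrete, here’s a minimal sketch using an AWS security group via boto3 (the group ID and CIDR block below are placeholders). The idea is the same whether you’re writing on-premises firewall rules or cloud security groups: allow only the traffic you have explicitly decided to allow, and deny everything else by default.

```python
import boto3

# Minimal sketch: a security group that allows inbound HTTPS only from a
# known office network. The group ID and CIDR below are placeholders.
ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # hypothetical security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {"CidrIp": "203.0.113.0/24",  # example office range
                 "Description": "HTTPS from the office network only"}
            ],
        }
    ],
)
# Anything not explicitly allowed is denied by default.
```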

It’s important to realize, however, that this notion of a walled-off network with implicitly trusted users on one side and everyone else on the other, while extremely common, is falling out of favor. The problem is that no matter how hard you try, perimeters can and will be breached. If the bad guy is inside your “secure” network and thus granted some level of trust, it’s game over. Similarly, if a good guy is outside your network (working from home, at the airport, etc.) and can’t get into the private network, well, that’s no good either.

Today, more companies are supplementing or replacing private networks with a “zero trust” security model. According to CrowdStrike, “Zero Trust is a security framework requiring all users, whether in or outside the organization’s network, to be authenticated, authorized, and continuously validated for security configuration and posture before being granted or keeping access to applications and data. Zero Trust assumes that there is no traditional network edge; networks can be local, in the cloud, or a combination or hybrid with resources anywhere as well as workers in any location.”

It’s a big topic, but relative to the EMA guidance, it’s important to recognize that firewalls and VPN concentrators are not the be-all and end-all of network security. You should also be moving toward a zero trust model, especially as your workloads move to the cloud and your employees move out of state.
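What does zero trust look like in practice? Here’s a simplified sketch, assuming your identity provider issues signed tokens and you’re using the PyJWT library; the key file, audience, and role claim below are all illustrative. The point is that every request is authenticated and authorized on its own merits, no matter which network it arrived from.

```python
import jwt  # PyJWT: pip install pyjwt

# Hypothetical per-request check: no implicit trust for callers that happen
# to be "inside" the network -- every request must carry a valid token.
PUBLIC_KEY = open("idp_public_key.pem").read()   # IdP signing key (placeholder path)

def authorize_request(token: str, required_role: str) -> bool:
    try:
        claims = jwt.decode(
            token,
            PUBLIC_KEY,
            algorithms=["RS256"],
            audience="clinical-data-api",            # illustrative audience
        )
    except jwt.InvalidTokenError:
        return False                                 # unauthenticated: deny
    return required_role in claims.get("roles", [])  # unauthorized: deny
```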

Security patches for platforms and operating system software should be applied in a timely manner.

Software is complex. If you start at the layer just above the CPU, that is, at the operating system level, and work your way up through various programming languages, application servers, standard (and non-standard) libraries, business code, network infrastructure, mobile phones, browsers, and so on, you will have traveled through literally millions of lines of code.

Within these millions of lines of code, thousands of bugs lie latent, and every bug is a potential security hole. As vulnerabilities are discovered, patches are made available, updates are released, and the vulnerability is made public. Users of these systems and libraries are expected to apply the patch or upgrade their system as soon as possible. Not doing so is the same as leaving the window open next to a locked door.
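One practical way to stay on top of this, at least for your application’s dependencies, is to make vulnerability scanning part of every build. Here’s a minimal sketch that assumes the pip-audit tool (from the Python Packaging Authority) is installed; every ecosystem has an equivalent scanner.

```python
import subprocess
import sys

# Minimal CI gate: pip-audit exits non-zero when a dependency has a known
# vulnerability, so an unpatched library fails the build instead of shipping.
result = subprocess.run(["pip-audit"], capture_output=True, text=True)
print(result.stdout)
if result.returncode != 0:
    sys.exit("Known vulnerabilities found -- patch before deploying.")
```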

Sadly, this is a practice more honour’d in the breach than the observance, as Equifax infamously demonstrated several years ago when it failed to patch Apache Struts, the popular Java web framework behind its consumer web portal. That attack (probably by Chinese state-sponsored hackers) was made worse by the fact that, as described above, once inside the Equifax network, the attackers were considered trusted and granted an excessive amount of privilege.

The use of USB sticks and other “bi-directional devices,” which could introduce malware, should be controlled. 

This is the IT equivalent of practicing safe sex. Do not insert into your laptop any USB sticks, DVDs, mobile phones, or other media or devices not explicitly known to be safe. Doing so can be enough to trigger code stored on that medium to run on your computer, meaning, once again, it’s game over.

I assume the EMA called this out for two reasons: one, the “USB drop attack” is an extremely common attack vector and, two, far too many people are prone to inserting random gadgets into their laptops. In fact, it was just such an attack, likely launched by Israel and the United States, that caused roughly one fifth of Iran’s uranium enrichment centrifuges to rip themselves apart around 2010.

Logical controls such as anti-virus software, intrusion detection and prevention software, and internal activity monitoring should be employed.

This advice is of a piece with the firewall advice given above. What the EMA is saying here is that it’s not enough to wall the bad guys off; you also need to be able to tell when they have inevitably breached your network or compromised your zero trust systems, and be able to mitigate and minimize the damage.
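As a toy illustration of what “internal activity monitoring” means, here are a few lines of Python that flag repeated failed SSH logins in a server’s auth log. A real deployment would use a proper intrusion detection system or SIEM rather than a homegrown script, but the principle is the same: watch what’s happening and raise an alarm when it looks wrong.

```python
import re
from collections import Counter

# Toy example: count failed SSH logins per source IP in a standard Linux
# auth log and flag anything that crosses a threshold.
FAILED_LOGIN = re.compile(r"Failed password for .+ from (\d+\.\d+\.\d+\.\d+)")
THRESHOLD = 5

failures = Counter()
with open("/var/log/auth.log") as log:
    for line in log:
        match = FAILED_LOGIN.search(line)
        if match:
            failures[match.group(1)] += 1

for ip, count in failures.items():
    if count >= THRESHOLD:
        print(f"ALERT: {count} failed login attempts from {ip}")
```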

Authentication methods should be commensurate with risk, with two-factor authentication recommended.  The two factors should include two out of three of the following:  Something you know (e.g., password); something you have (e.g., mobile phone with authentication software); something you are (e.g., biometric measure).

Authentication is tricky. Make it too easy and you’re inviting trouble. Make it too hard and you diminish the value of the system. But authentication is also essential; indeed, it’s the principal component of zero trust security. As we know, you can’t allow users to have a password of, say, “password” or “abc123,” but even complex passwords, e.g. “AX-10k@llg05,” can be compromised, or simply forgotten, mistyped, or written down on a Post-it note.

What the EMA is advocating for is supplementing (or replacing, see next) password based authentication with another factor, for instance, a thumbprint or a cryptographically strong key generated at the time of use. This second factor significantly improves security without overly impacting ease of use.
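That second factor is less exotic than it sounds. Here’s a sketch of the time-based one-time password (TOTP) scheme used by most authenticator apps, using the pyotp library (the account name and issuer below are made up).

```python
import pyotp  # pip install pyotp

# Sketch of the "something you have" factor: a per-user secret is shared once
# with the phone's authenticator app, then used to generate short-lived codes.
secret = pyotp.random_base32()   # stored server-side at enrollment
totp = pyotp.TOTP(secret)

# The QR code the user scans at enrollment encodes this URI:
print(totp.provisioning_uri(name="jane@example.com", issuer_name="Clinical EDC"))

code_from_phone = totp.now()     # in real life, typed in by the user
print("Code accepted:", totp.verify(code_from_phone))
```

The secret is shared once at enrollment; after that, the six-digit code changes every 30 seconds, so a stolen or guessed password alone is no longer enough to get in.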

Companies should have policies for password length, complexity, expiry, login attempts before lockout, and reset.

This is where the EMA and Synclinical part ways. It’s not that this is bad advice, but it’s not wholly good either. If you’re going to have passwords at all, there is one password management policy that’s more important than all of the others, and that’s maximizing entropy. Putting effort into password policies that don’t increase entropy (that is, randomness, which is what makes a password hard to guess) is wasteful.
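For the curious, entropy is easy to quantify for randomly generated passwords: each character drawn uniformly at random from an alphabet of N symbols contributes log2(N) bits. A quick sketch using only the Python standard library:

```python
import math
import secrets
import string

# Entropy of a randomly generated password: length * log2(alphabet size).
alphabet = string.ascii_letters + string.digits + string.punctuation  # 94 symbols
length = 16

bits = length * math.log2(len(alphabet))
print(f"{length} random characters ~ {bits:.0f} bits of entropy")  # ~105 bits

# Generating one with the standard library's cryptographically secure RNG:
password = "".join(secrets.choice(alphabet) for _ in range(length))
print(password)
```

Note that the math only holds if the password really is random, which is exactly why the next point matters.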

Fortunately, the EMA guidance also encourages the use of enterprise password managers, such as Dashlane or 1Password, which is very good advice. Let the password manager generate high-entropy passwords and keep them secure. Much better than allowing users to make up and manage their own.

It must be noted, however, that passwords of any kind have overstayed their welcome. Given the challenges associated with using them safely, it would be nice to move toward a future that’s password free. Which is exactly what Google, Microsoft, Apple, and others are very actively doing with Passkeys. Passkeys are a welcome addition to security best practices, and the members of the FIDO Alliance, which manages the effort, are moving very quickly to provide the underlying infrastructure.

In short, so long as you are using passwords, yes, make sure they are of maximum entropy, easy to use, and well managed, but begin work now towards a password-less future. And please, in the name of all that’s good and holy, do not implement automatic password expiration.

Connections over the internet should take place over a secure and encrypted protocol, such as a Virtual Private Network or a secure HTTPS connection.

I suppose there’s still somebody somewhere who is not encrypting communications, but if so, they are a rare breed indeed. This very sound advice is just the EMA repeating a policy that is already very well practiced: Encrypt everything everywhere. It’s puzzling, though, that the guidance singles out communication (data in transit) and doesn’t mention encrypting storage (data at rest). Nevertheless, you should totally do that too.
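For completeness, here’s roughly what “encrypt everything everywhere” looks like in code. The in-transit half is just using HTTPS and letting the client library verify certificates; the at-rest half is a sketch that assumes AWS (the bucket name is a placeholder), though every major provider has an equivalent switch.

```python
import boto3
import requests

# In transit: requests verifies TLS certificates by default, so a plain
# https:// URL already gives you an encrypted, authenticated channel.
response = requests.get("https://api.example.com/health", timeout=10)

# At rest (AWS sketch; the bucket name is a placeholder): enable default
# server-side encryption so every object written to the bucket is encrypted.
s3 = boto3.client("s3")
s3.put_bucket_encryption(
    Bucket="my-clinical-trial-backups",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
        ]
    },
)
```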
