Ready Room Blog

One of These Things is Not Like the Others

One particularly challenging aspect of using metrics in clinical research is figuring out what you're looking for. With operational metrics, where you're comparing progress against plan, it's easy: the plan is your target. For quality metrics, particularly those that may signal issues with protocol compliance, it's not so straightforward. How many protocol deviations should a site have? How many serious adverse events indicate a problem? We can set targets for these measures, but how do we derive them?

Where we don't have a clear target or threshold for a particular measure, we can learn a lot by comparing measurements from site to site, using the mean and standard deviation across all sites to identify sites that are outliers. A site that is very different from other sites may differ by chance, but it may also be interpreting the protocol differently than other sites - or even falsifying data.

First, we decide on the measures we want to compare. Usually, we're looking at measures that have some subjective component; measures that should not have a subjective component but might be subject to manipulation; and measures that might indicate a quality issue. In the first category, we are interested in measures that are subject to reporting, such as numbers of protocol deviations, important protocol deviations, adverse events, or serious adverse events. In the second category, we might look at key efficacy measures specific to the protocol. In the last category, we might consider protocol deviations that are evident in the data, such as numbers of out-of-window visits or missed procedures.

Next, we compile our dataset. For example, if we're looking at the number of adverse events reported per patient, and we have a study with six sites, our dataset might hold one row of values per site. Site A, say, has 20 subjects: one subject has one adverse event, another subject has two, four different subjects have four adverse events each, one subject has 50 events, and so on. We've arranged each site's subjects in ascending order of number of events for ease of analysis.

Now we need to calculate the distribution of values for each site. To do this, we calculate each of the following:

- Number of subjects at each site, found by counting the number of values in that site's dataset. For example, Site A has 20 subjects, or 20 values for adverse events.
- Min, the minimum value for the site. For Site A, this is 1, calculated with =MIN(A1:AN1).
- Lower Quartile (LQ), the point below which 25% of the site's data points fall when arranged from smallest to largest. The Excel formula for this is =QUARTILE([range],1). So if our values for Site A are in the range A1 to AN1, the formula is =QUARTILE(A1:AN1,1).
- Median, the value that falls in the middle of all the values in the dataset, or the average of the two middle values if there is an even number of values. This is calculated with =MEDIAN(A1:AN1).
- Upper Quartile (UQ), the point above which 25% of the site's data points fall when arranged from smallest to largest, with the formula =QUARTILE(A1:AN1,3).
- Max, the maximum value for the site. For Site A, this is 50, calculated with =MAX(A1:AN1).
- Upper Quartile - Lower Quartile, the interquartile range.
- Median - Min, the span from the site's minimum up to its median.
- Max - Median, the span from the site's median up to its maximum.
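As a cross-check on the spreadsheet work, the same per-site summary can be sketched in Python using only the standard library. The sample values here are hypothetical, not the article's actual dataset; note that `statistics.quantiles` with `method="inclusive"` uses the same interpolation as Excel's QUARTILE function.

```python
from statistics import quantiles

def site_summary(values):
    """Summary statistics for one site's per-subject counts,
    mirroring the Excel formulas described above."""
    ordered = sorted(values)
    # "inclusive" matches Excel QUARTILE / QUARTILE.INC interpolation;
    # the middle quantile is the median.
    lq, med, uq = quantiles(ordered, n=4, method="inclusive")
    return {
        "n": len(ordered),                      # number of subjects
        "min": ordered[0],                      # =MIN(range)
        "lq": lq,                               # =QUARTILE(range,1)
        "median": med,                          # =MEDIAN(range)
        "uq": uq,                               # =QUARTILE(range,3)
        "max": ordered[-1],                     # =MAX(range)
        "iqr": uq - lq,                         # Upper Quartile - Lower Quartile
        "median_minus_min": med - ordered[0],   # Median - Min
        "max_minus_median": ordered[-1] - med,  # Max - Median
    }

# Hypothetical adverse-event counts per subject for one site
site_a = [1, 2, 3, 4, 4, 4, 4, 50]
print(site_summary(site_a))
```

Sorting first isn't strictly required for the quartile call, but it mirrors the ascending arrangement used in the spreadsheet and makes min and max trivial to read off.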