As developers and vendors of Electronic Data Capture (EDC) systems, we were, at times, arrogant.
We had this rosy picture that if we created systems that achieved cleaner data faster, the world of Clinical Research would change.
15 years after the first internet-based EDC systems emerged, this major sea change in drug development has not yet occurred.
So why is that? What did we get wrong? Why are drug companies not already seeing a significant ROI with EDC?
There are many reasons, but one in particular is sharp in my mind at the moment:
A lack of measurable quality
I am not saying a lack of quality. What I am saying is a lack of measurable quality. This is a critical distinction.
The question is – what is good enough when it comes to data quality, and how do I know I have achieved it?
Imagine I was a Pharmaceutical company executive, and I was approached by a Senior Manager asking that all manual data cleaning activity be stopped…
Senior Manager – “We have introduced an EDC system, and the EDC system checks 100% of the data… there is no need for manual data review and cleaning… I would like to provide instructions that manual data cleaning is not required. Can I have your approval?”
Pharmaceutical Executive responds: “Have you any proof?”
Senior Manager replies: “Ah… no, but we believe the data should be of sufficiently high quality, as we are using 100% edit checking to assure consistency…”
Pharmaceutical Executive: “‘Should be’, you say? Sounds very risky to me… what data do you have to prove we will not be impacted?”
Senior Manager replies: “Mmmm… not sure…”
Pharmaceutical Executive: “OK… prove to me that with no manual data cleaning, no risk exists of resulting submission, efficacy or safety failures… I don’t want this to come back to bite us without understanding the implications!”
Senior Manager then goes away…
So, how would this Senior Manager prove that the quality of safety and efficacy data was not significantly negatively impacted by such a strategy? This has proven to be difficult.
One measure is to examine the proportionate change in data as a result of a manual review or query. Unfortunately, most EDC systems don’t make it easy to assess that. When data is changed, was that due to a manual review finding, or was it due to something else?
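To make the measure concrete, here is a minimal sketch of how it could be computed from an EDC audit trail. The record layout and the reason codes (“manual_query”, “edit_check”, and so on) are purely hypothetical – real audit trails vary by vendor – but the calculation itself is just the ratio the paragraph above describes: values changed because of manual review, divided by all values entered.

```python
from collections import Counter

# Hypothetical audit-trail records: (field identifier, reason the value changed).
# Real EDC audit trails differ by vendor; these reason codes are illustrative.
audit_trail = [
    ("AE.start_date", "manual_query"),     # changed after a manual review query
    ("VS.sbp", "edit_check"),              # changed by an automated edit check
    ("DM.dob", "manual_query"),
    ("VS.dbp", "site_correction"),         # changed by the site, unprompted
]

# Assumed total number of data points captured in the study.
total_values_entered = 10_000

def manual_cleaning_yield(trail, total_entered):
    """Proportion of all entered values that were changed as a
    result of manual review – the 'yield' of manual cleaning."""
    reasons = Counter(reason for _, reason in trail)
    return reasons["manual_query"] / total_entered

print(f"Manual cleaning yield: "
      f"{manual_cleaning_yield(audit_trail, total_values_entered):.4%}")
```

Tracked on a rolling basis per study, a figure like this is exactly the evidence the Senior Manager in the dialogue above was missing – provided the audit trail reliably records why each value changed, which is the part most systems make hard.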
Let me give you an example of a failed methodology. A large pharma company – which I will not name to avoid blushes – maintained a policy for many years (and may still do) of re-checking, in SAS, the data delivered by their Data Management group. It was claimed that the quality of data delivered by CDM and EDC systems could not be fully trusted, and that it was therefore necessary to re-run SAS routines across the data to confirm that the quality was indeed up to scratch. To counter this, a Senior Manager at the same pharma proved that an extremely small percentage of data changes occurred as a result of this re-checking, and that the impact of those changes on the statistical (and safety) outcomes of the studies was zero.
However, the argument was lost, because it could not be reliably predicted across studies how this measure of quality might be impacted.
So do solutions exist? Yes, I think they do.
Considering the lack of structure in the underlying datasets, Medidata have done a good job of extracting valuable statistical information from the thousands of studies run on the Rave platform. This has provided indicators showing that only a small percentage (less than 3%) of data is in fact revised following initial data entry. This sort of information is better than most. Oracle, the other leading eClinical vendor, faces similar challenges. Even so, it is not possible to ‘see’, on a rolling basis, the effect of the data cleaning exercise relative to the amount of cleaning carried out.
The question is, why do EDC systems not deliver measures of data quality out of the box? Why do complex, custom, offline batch reports need to be developed that provide only indicators of data quality?
I think we will see, in the next round of eClinical solutions, a better appreciation of the need for measurable quality and, related to this, far better real-world workflow support.