PRO Validation – Endless Spiral?

Patient Reported Outcomes (PROs) have become a focus area for Clinical Research and Development in recent years. It only makes sense: so many symptoms are really best known by the patients themselves, and in many cases (such as pain) the patient is the only reliable source for measuring the severity of that symptom.

Since the FDA’s clear endorsement of the use of PROs in its 2009 guidance document, things have really picked up for PROs as well as ePROs. ePRO is often the best way to provide the FDA with evidence that “The data was collected according to the protocol”, e.g. through reliable date-and-time stamps.

The FDA clearly had good intentions when they published their expectations for PROs and ePROs in this guidance. Three years on, it has become quite clear that there are a lot of unanswered questions around this topic. For example, the FDA guidance was written specifically for Patient Reported Outcomes, but what about instruments that are clinician-rated (ClinRO) or completed by the patient’s caregiver (ObsRO)? The guidance specifically states that a PRO is a report of the patient’s health condition that comes directly from the patient, i.e. excluding ClinROs and ObsROs. Since then, there have been statements that the FDA does in fact expect similar requirements to be met for the other types of outcome assessments, and all of these have now been labeled “Clinical Outcome Assessments”, so you can expect to see a new acronym, ‘eCOA’, replacing the well-established but too limited ‘ePRO’ in the near future.

The other big open question from the FDA guidance was instrument migration from paper to electronic. The guidance very helpfully states that “Additional qualitative work may be adequate, depending on the type of modification made”. It is understandably difficult to make very specific recommendations in these guidance papers, as there are many different situations to account for, so the published statements end up being very generic and not always very helpful when planning for a migration. Thankfully, the ISPOR ePRO Good Research Practices Task Force came to the rescue and published further recommendations on instrument migration, actually categorizing the different types of changes and the kind of qualitative work recommended for each.

The ISPOR paper is a good effort to add clarity and has helped a lot in establishing working practices within the eCOA space. However, there are still many open questions, and this vagueness in recommendations and guidance documents is a difficult issue. The clinical research industry is very conservative by nature, and when presented with a vague recommendation, it tends to lean towards extreme conservatism to avoid any risk. This then spirals into more open questions and even more conservative approaches, and quite often into working practices that are very expensive, time consuming and add no value to the process, or in some cases even have a negative impact on the study! Over time, people start to think that these working practices are what the FDA expects, when all the agency has really said on the topic is that “Additional qualitative work may be adequate…”. We have seen similar assumptions becoming standard practice with Source Data Verification in EDC studies.

So what are the key topics here? Well, here are some from my list:

A) The ISPOR paper recommends usability testing and cognitive debriefing for instruments that go through ‘minor’ modification during migration. But what do you actually need to test, and how? There is very little methodological guidance on how to do this type of testing.

You need to assess that the content validity has not changed during the migration. But how can you do this if you’re not the instrument developer and don’t really know what the content validity was in the first place? How do you know whether the subjects really understand and respond to the questions as intended? And what if they don’t understand the questions? You can’t change the original instrument; in fact, you’re really just testing for the effect of the migration and usually assume the original instrument is fit for purpose…

B) If you go through all this testing on one eCOA device / technology / platform, is that evidence of any use if you later use a different device / technology / platform? Let’s say that you do usability testing on a handheld PDA device, but later want to use the same instrument on a tablet. There is no guidance for electronic-to-electronic migration. And what if you want to use two different eCOA vendors?

Our initial recommendation here, in the absence of further regulatory guidance on the topic, is to define a risk-based approach supported by a rationale for your methods. Write a clear qualitative research protocol, state clearly what you are going to test, and think about how this will allow you to deploy those instruments in the future. For example, you can test more than one device and see if there is a difference.

C) Does any of this really make much sense? I’ve never heard of a usability / cognitive debriefing study actually failing, or of an equivalence study where the results did not meet the levels recommended in the ISPOR guidance. If subjects do have comments about the COAs being tested, we’re often unable to change them because the originals are so well established and were developed a long time ago. I’m not sure I see the value in doing all this just to satisfy all the expectations we think the FDA has. Isn’t there already enough evidence out there that, for most standard types of migrations, the content validity is equivalent to that of the paper version?

There are, however, many instances where I think this type of testing can be really valuable. For example, some instruments really do change a lot when migrated to electronic platforms, such as event-driven diaries where patients need to make entries, often several times a day, after a certain event happens; overactive bladder diaries are a prime example. There is practically an infinite number of ways to implement this type of diary on an electronic platform, and some implementations surely perform better than others.

Scenarios like this can be tested for usability and patient preferences, and the eDiary should be tested as a whole ‘unit’, not with a focus only on the ‘instrument’ part of it. This type of assessment is often not done, because we are so concerned about the regulators that we tend to forget the real end user: the patient.

Personally, I’m very interested in usability research and usability testing when it’s done with the intent of making eCOA data collection better. I have been involved in many such projects in the past, and every time I have learned something unexpected simply by watching patients use the tool and asking questions. There is a lot of good work being done by the C-Path Institute’s ePRO Consortium as well as some ISPOR task force groups, and I hope this will result in new recommendations that add clarity and value to the instrument migration process in the near future.
