Convergence and Big Data: Conversations from the Post-Approval Summit

May 20, 2013 | Along with the Massachusetts Eye and Ear Infirmary, Quintiles Outcome presented the 2013 Post-Approval Summit at Harvard Medical School on May 7-8, 2013. The summit focused on strategies and best practices for demonstrating and improving the safety, effectiveness, value, and quality of healthcare products and services through Phase IV studies, patient registries, risk management programs, and quality initiatives.
 
Dr. Richard Gliklich is President of Quintiles Outcome, the division created when Outcome Research was acquired by Quintiles in late 2011 and combined with Quintiles' existing late-phase research group, and which later acquired more than 60 million electronic health record assets. Gliklich and his team focus on real-world and late-phase research using three key modalities: interventional research, prospective observational research, and retrospective data.
 
Gliklich spoke with editor Allison Proffit and gave Clinical Informatics News a summary of the conversations and issues raised at the event.
 
CIN: How are the different stakeholders working together now? 
 
Gliklich: There’s been a kind of convergence of stakeholder needs across multiple parties. In the past, we used to look at safety as the province of the regulator, and effectiveness as the province of the payor, the provider, and eventually the patient. What we see now is that these different stakeholders for health information and health evidence are expanding their view.
 
For example, Janet Woodcock (Food and Drug Administration), Stella Blackburn (European Medicines Agency), and Danica Marinac-Dabic (FDA), from the regulatory agencies, were talking about the need to start looking at effectiveness, basically the benefit component of the benefit-risk ratio, and how that will become increasingly important as products are marketed and we get a better and better understanding of safety signals. So we saw that kind of crossover.
 
Similarly, the payors are seeing costs associated with adverse events and so on, so they have an interest in safety. The idea that one group is more or less interested in one area or another is blurring. We’re also seeing other stakeholders in the mix, obviously the providers and the patients, who are interested in both. And then there’s an increasing interest across the board, and internationally, in value.
 
What are some of the questions each stakeholder needs to ask? 
 
How can we be more comprehensive in gathering evidence that meets multiple stakeholder needs, and also multiple needs within each stakeholder? What new areas haven’t we rigorously looked at before, such as quality of life and much longer-term outcomes information, and how does that tie into a broader learning healthcare system? In the US, for example, providers will be at risk for outcomes under accountable care organizations, so they’ll have a renewed interest in this as much as payors. And the entities that regulate drugs and devices need to continuously review what kind of information is coming out, because we’ll have more and more ability to do that.
 
What else did you address? 
 
We had the discussion from the perspective of the different methodologists, if you will, who presented views on how best to collect safety information, and on how we’re collecting safety and effectiveness information in new ways now. We looked at it through three lenses. First was the traditional randomized trial and its importance. We heard from Steve Nissen (Cleveland Clinic), who talked about using trials to evaluate safety information, how critical that is, and how important it is to design those trials well. [He discussed] the impact and the wasted dollars when a number of the studies he cited were not well designed, lacked good comparators, or used inappropriate comparators. For example, if you’re comparing drug A to drug B and you’re using drug A at 40 mg a day and drug B at 20 mg a day, what kind of comparison are you really getting? So being smarter about comparisons, and also being smarter about intent-to-treat evaluation versus what he called ‘modified intent to treat,’ which he thought was an issue.
 
Can you tell us about some of the conversations around trial trends?
 
[We discussed] how to bridge from that purist randomized trial and move it into the real world. Sean Tunis (Center for Medical Technology Policy) talked about practical trials, or large simple trials (there are a number of different terms for that), and how those might come about. He talked about some of the efforts underway to develop infrastructure for that. There are some coalitions now, and NIH has a “co-laboratory” that’s looking at that.
 
Then we turned to another group of methods, observational research methods, particularly prospective observational research focused on registries. Elise Berliner from the Agency for Healthcare Research and Quality talked about the agency’s efforts to standardize methods and practice in registries and other observational studies through the AHRQ handbook Registries for Evaluating Patient Outcomes: A User’s Guide… She also talked about the Outcome Measures Framework for patient registries, an approach to try to get groups to use the same definitions, the same outcomes, and the same time periods, so that data can be aggregated and compared more easily across multiple efforts internationally.
 
Tell us a bit about the conversations around big data and EHRs.  
 
The challenge is not getting to big numbers; the challenge is getting to methods that will reduce confounding or bias. But there are strong methods coming about, such as high-dimensional propensity scoring, where you really can pull together a whole number of covariates and get much, much closer, not to true cause and effect, but to convincing hypotheses that could be tested in that paradigm for some things. We’re not quite there yet, but the argument was that the data is really showing that we’re making leaps and bounds, and we’re almost there. It’s not 10 or 15 years off.
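(For readers unfamiliar with the idea, the sketch below shows the general shape of propensity-score adjustment: estimate each patient's probability of treatment from many covariates, then reweight before comparing outcomes. It is a minimal illustration on simulated data, not the high-dimensional propensity score method discussed at the summit; all variable names and numbers are assumptions.)

```python
# Minimal sketch of propensity-score adjustment over many covariates.
# This is NOT the hd-PS algorithm discussed at the summit; it only
# illustrates balancing treated and untreated groups before comparing
# outcomes. All data here are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, p = 5000, 50                      # patients, covariates (e.g., codes, labs)
X = rng.normal(size=(n, p))          # stand-in for EHR-derived covariates
treat = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))   # confounded treatment
outcome = 0.5 * treat + X[:, 0] + rng.normal(size=n)  # true effect = 0.5

# 1. Model probability of treatment given covariates (the propensity score).
ps = LogisticRegression(max_iter=1000).fit(X, treat).predict_proba(X)[:, 1]

# 2. Inverse-probability-of-treatment weights balance the two groups.
w = np.where(treat == 1, 1 / ps, 1 / (1 - ps))

# 3. Naive vs. weighted difference in mean outcome.
naive = outcome[treat == 1].mean() - outcome[treat == 0].mean()
adjusted = (np.average(outcome[treat == 1], weights=w[treat == 1])
            - np.average(outcome[treat == 0], weights=w[treat == 0]))
print(f"naive difference:    {naive:.2f}")
print(f"weighted difference: {adjusted:.2f}   # much closer to the true 0.5")
```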
 
We heard from Sebastian Schneeweiss from Harvard, who talked about electronic health record and other health system data, and about using some new methods to try to address confounding, both for safety and effectiveness, and potentially for value in the future; he talked about how that field has progressed and why that’s valuable… We need to move to be more comprehensive, because one method may not give you all the information. Combining methods may not only give you more information and a better view, but may be less expensive. So embedding a trial within a registry, or doing a study that combines existing electronic health record data and then follows patients forward in time, could lower the cost while still presenting a more timely and complete view of the results for multiple stakeholders.
 
Robert Cuddihy from Sanofi presented a program that is taking a network of practices based on specific EHR systems and bringing them in as a network to follow diabetes. The data is passively collected in the EHR, and patients are directly invited to participate and then followed directly, coupled with their electronic health record data, as they give permission. Physicians get reports back but don’t have a lot of other work to do, and the patients are followed forward and also given some information. This is a new model for doing research and practice: leveraging electronic health records, but combining retrospective data with the ability to collect prospective quality-of-life information directly from the patients.
 
So data management and types of data were big issues? 
 
We had a panel on the need for new and comprehensive models for evidence development, from different perspectives. Chris Paul-Milne (Tufts Center for the Study of Drug Development) talked about how the types of information being collected through studies and trials are evolving, and how we’re seeing more of a mix of trials with observational research approaches. He talked a little bit about the costs of the trials and why it’s going to be important to get to a better economic mix of study programs, to be efficient.
 
We heard from Mira Pavlovic from EUnetHTA, which is kind of the trans-European health technology assessment group, about their evolving thoughts on how to get more comprehensive about evidence and also how to interpret different evidence sources for their importance in making value decisions within Europe. And we heard from Sachin Jain, who is now the CMIO at Merck, who talked about trying to identify partnerships for their evidence development, relying on external groups that are more forward-thinking and capable rather than necessarily trying to build everything within Merck, and about how those partnerships are going.
 
What real pain points were identified? 
 
Unlike a claims dataset, which is complete, with an EHR dataset you only know what you know about what you received, and you know nothing about what’s not there. You don’t know if it wasn’t ordered, you don’t know if it wasn’t entered; you just don’t know. That missing data issue is one that continues to need a lot of work, both from the methodological perspective, but even more so in getting to standardization of what kind of information is routinely and completely captured in electronic health records. So that’s more of a social movement. We need to get to some sort of minimal dataset concept in electronic health records, a minimal set of fields that physicians will complete more fully, to be able to study certain things well.
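(As a small, purely hypothetical illustration of the "minimal dataset" idea, the sketch below checks EHR-style records for a handful of required fields. The field names and records are invented for this example; they are not a standard proposed at the summit.)

```python
# Hypothetical check of EHR records against a "minimal dataset" of fields
# that should be populated before an outcome can be studied reliably.
# Field names and sample records are illustrative only.
MINIMAL_DATASET = {"diagnosis_code", "medication", "dose", "smoking_status", "hba1c"}

records = [
    {"diagnosis_code": "E11.9", "medication": "metformin", "dose": "500 mg",
     "smoking_status": "never", "hba1c": 7.1},
    {"diagnosis_code": "E11.9", "medication": "metformin"},  # incomplete: absent, or just not entered?
]

def completeness(record: dict) -> float:
    """Fraction of the minimal dataset that is actually populated."""
    present = {f for f in MINIMAL_DATASET if record.get(f) not in (None, "")}
    return len(present) / len(MINIMAL_DATASET)

for i, rec in enumerate(records):
    missing = sorted(MINIMAL_DATASET - set(rec))
    print(f"record {i}: {completeness(rec):.0%} complete, missing {missing}")
```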
 
Was there anything that sort of came up in the conversation that you weren’t expecting as much focus or emphasis on? 
 
Well, I think the audience was surprised to hear how interested the regulators are in benefit and effectiveness information. I think that was a surprise. It’s clearly been going on for a while, and even as people talk about progressive licensure or conditional licensure, the thought comes up. But to hear Janet Woodcock or Stella Blackburn specifically say that they really are trying to get a handle on it, that we have to understand the benefit to know whether the benefit continues to outweigh the risk, stood out. Now, part of that falls on the manufacturer, to be able to demonstrate benefit, and the message came across that, from a regulatory perspective, continuing to collect benefit information is important, potentially for maintaining your licensure, and that it will become more important over time. But there was also the whole philosophy of the regulators viewing their role as not just evaluating harm, but really locking in on this benefit-to-risk ratio. That was one thing I think popped out.
 
We had a great talk by Jens Grueger (VP, Head of Global Pricing & Market Access, Roche) on changing the drug development and commercialization paradigm. He really laid out that pharma is looking at the full stakeholder panel early in development and trying to make decisions, de novo decisions and investment decisions, that take into account much more of their ability to reach multiple stakeholders and bring evidence to multiple stakeholders that will have an impact on the ultimate success of their product, and thinking about that earlier and earlier.