Ethicists Seek to Improve Risk/Benefit Estimates in Translational Trials
By Deborah Borfitz
March 21, 2011 | Two ethics experts posit that drug developers, research funding agencies, and would-be trial participants could all make more informed decisions about clinical drug testing if pre-clinical studies routinely controlled for bias, and if risk/benefit estimates in new drug trials tempered enthusiasm about a compound with past findings about similar interventions. The all-too-familiar scenario is that researchers announce the discovery of a new drug that eradicates disease in animals and then, a few years later, the drug bombs in human trials, says Alex John London, associate professor of philosophy at Carnegie Mellon University (CMU).
London, who also directs CMU's Center for Ethics and Policy, co-wrote an essay on predicting harms and benefits in translational trials that appears in the latest issue of the journal PLoS Medicine. The lead author is Jonathan Kimmelman, associate professor at McGill University's biomedical ethics unit and department of social studies of medicine. The project was funded by a grant from the Canadian Institutes of Health Research.
Three ethical concerns motivated the pair to write the paper. One is that the informed consent process may be hampered by unrealistic expectations of therapeutic outcomes on the part of clinical trial participants, says London. Factoring in past failures of like compounds would effectively redefine the benefit of participation from finding relief or a cure to altruistically helping advance medical knowledge.
Secondly, institutional review boards (IRBs) are charged with ensuring study risks are reasonable in proportion to study benefits, continues London. “The more accurate and realistic [researchers] can be about likely results, the more [IRBs] are able to make a conscientious risk assessment.” Finally, it’s imperative that clinical studies have scientific value because of the scarcity of resources—participants as well as funding—within the research enterprise.
Eli Lilly's Alzheimer's candidate semagacestat is but one in a string of recurrent failures in clinical translation, London and Kimmelman report. In cancer, only 5% of highly promising products ever get clinically translated. Neuroprotective stroke treatments have also consistently failed randomized trials.
The current pattern of "boom and bust" drug development may be related to the way researchers predict clinical outcomes of their work, says London. Preclinical studies of semagacestat have yet to be published, although the drug was first tested in human beings more than five years ago. As reported in the PLoS Medicine article, eight other anti-amyloid drugs have failed randomized trials or been abandoned. A more accurate risk/benefit calculation might have been made had the previous failed attempts to intercept the same disease pathway been factored into corporate decision making.
Little if any scientific literature speaks to the importance of setting expectations for a drug in the context of how past studies with drugs in the same reference class have fared, or how to do so. But drug developers might logically do so based on an intervention's targeted pathological process, the authors argue. Possible methods for correcting myopic interpretation of pre-clinical results cannot be identified, let alone assessed and improved upon, without open discussion on the matter. Current policies of the American Society of Clinical Oncology and the International Conference on Harmonisation, among others, reflect a tendency to rely on an overly narrow pre-clinical evidence base that the authors term "evidential conservatism."
Another concern is “variation in who uses which methods [such as randomization and blind testing] to reduce bias,” says London. Treatments may also get applied in significantly different ways in clinical and preclinical studies. Preclinical studies might further neglect to test the robustness of cause-and-effect relationships, such as how well an agent performs in rodents versus higher order species.
How data from animal studies gets reported may not be standardized even within individual research organizations, says London. Professional guidelines, if uniformly adopted in the pre-clinical phase, would improve study reporting and design for the “next link in the chain of investigation.”
Kimmelman says policies might be considered to reduce bias in the evidence base upon which trials are designed and approved. Specifically:
● Journal editors could mandate the use of reporting guidelines for pre-clinical research as a condition of publication. The recently proposed Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines are a good starting point, says Kimmelman.
● Prospective registration of pre-clinical research could be required as it now is for clinical studies, ensuring all results—bad as well as good—get published.
● IRBs could require investigators to do a systematic review of any drug being used for the first time in humans to assess how similar agents have fared in translational studies, and publish the information in trial brochures.
● The Food and Drug Administration could “pay more careful attention to claims about clinical promise in early phase research,” continues Kimmelman, so drug candidates might get more vigilantly vetted.
● Trial sponsors and investigators could develop therapeutically specific research practice guidelines. Stroke research is "light years" ahead of other research areas, notably oncology, because of efforts by the Stroke Treatment Academic Industry Roundtable (STAIR) to address threats to the validity of pre-clinical research, says Kimmelman. "Disappointingly, some of the first drug candidates that met STAIR criteria have not translated," he adds.
Pre-clinical researchers seem receptive to the notion of practice guidelines, Kimmelman says. “They’re as frustrated as the rest of us that results [of their work] don’t make it to the bedside.”
Under a second, three-year grant from the Canadian Institutes of Health Research, Kimmelman and London are now examining a large cohort of drugs introduced in human trials to determine the relationship between practices at the pre-clinical level and outcomes of translational research.