Researchers Find Uses for Open Access Trial Databases

By Allison Proffitt

March 23, 2016 | Clinical trial data made publicly available are rarely being used to validate primary results, researchers from the Duke Clinical Research Institute (DCRI) have found. Across three open access clinical trial databases, only 15.5% of the available trials have been requested, and research proposals have focused on epidemiological studies, subgroup analyses, analyses of the disease state, and predictors of treatment response rather than confirmatory studies.

DCRI published its findings yesterday in the Journal of the American Medical Association (JAMA) (doi:10.1001/jama.2016.2374).

The findings are not quite what the researchers expected, Ann Marie Navar, a fellow at the Duke University School of Medicine and one of the co-authors of the study, told Clinical Informatics News.

“There’s been a call in the scientific community for several years for improved access to clinical trials data,” Navar said. And since 2013, “there have been collaborations between academia and industry to create platforms to share data.” Across three platforms, large numbers of datasets have been made available. But, “validation studies just haven’t been done,” she lamented.

More than 3,000 clinical trials are available through three open access platforms that the DCRI team considered: ClinicalStudyDataRequest.com, the Yale University Open Data Access Project (YODA), and the Supporting Open Access for Researchers (SOAR) initiative. The platforms include data deposited by GlaxoSmithKline, Astellas, Boehringer Ingelheim, Eisai, Eli Lilly, Novartis, Roche, Sanofi, Takeda, Union Chimique Belge, ViiV Healthcare, Johnson & Johnson, Medtronic, and Bristol-Myers Squibb.

To access the datasets, researchers submit proposals, which independent scientific review boards not associated with industry evaluate for scientific merit and for whether the research design is adequate to achieve its scientific objectives. Approved projects are granted access to the clinical trial databases.

DCRI reviewed all proposals and data use agreements from the inception of each platform (the first launched in 2013) through December 31, 2015. A total of 3,255 clinical trials were available across the platforms, but only 505 unique trials had been requested. Researchers had submitted 234 study proposals, of which 177 had been processed and met initial requirements. Independent review boards rejected only 12 of those proposals, and 113 projects had completed data sharing agreements.
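The headline 15.5% figure follows directly from these counts. As a quick, purely illustrative check (a minimal Python sketch of the arithmetic using only the numbers quoted above, not anything from the JAMA letter itself):

```python
# Illustrative arithmetic only: reproduces the percentages implied by the
# counts reported in the article (doi:10.1001/jama.2016.2374).
trials_available = 3255    # clinical trials available across the three platforms
trials_requested = 505     # unique trials requested at least once
proposals_submitted = 234  # study proposals submitted
proposals_processed = 177  # processed and met initial requirements
proposals_rejected = 12    # rejected by independent review boards
agreements_signed = 113    # projects with completed data sharing agreements

print(f"Trials requested: {trials_requested / trials_available:.1%}")      # 15.5%
print(f"Rejection rate:   {proposals_rejected / proposals_processed:.1%}")  # 6.8%
print(f"Signed agreements per submitted proposal: "
      f"{agreements_signed / proposals_submitted:.1%}")                     # 48.3%
```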

Only five of those studies seek to reanalyze the data with respect to the primary endpoint, Navar said, and only one has so far been published. And yet even in such small numbers, Navar sees a “success story” for open access.

“The one publication that we could find was a study that attempted to validate the findings of a randomized clinical trial and it found contradictory findings.”

Glass Half Full

Navar and her colleagues expected more confirmatory research, but acknowledged that doing such research can be tough.

“There are challenges to both conducting and re-interpreting analyses of clinical trial data,” she conceded. “But just because there are challenges in interpreting it doesn’t mean it shouldn’t be done. It’s a bit of a Pandora’s box, but right now we’re not even opening it.”

What the team did not expect, though, was the variety of other types of research being proposed.

“We were pleasantly surprised by the depth and breadth of the science being conducted using these data. It’s a testament to the wealth of information in these datasets and a huge benefit of these platforms,” she said. “It was fun to review the proposals and see how creative some of the science using these datasets is.”

For example, Navar mentioned four studies looking at the placebo or nocebo effect. She pointed out that, taken together, the datasets could support a vast study of how people respond to placebo in trials.

And these research efforts are important. “A lot of the calls for open access focused on validation studies, but the other part was making data available that was done with the help of hundreds of thousands of volunteers,” Navar pointed out. “Scientists aren’t as interested in repeating someone else’s work. They’re doing really exciting work with these data: better studies of the disease state, methodological studies, how can we analyze clinical trial data.”

Navar and the DCRI team do hope that more researchers will make use of these rich datasets. She said that increased visibility of the resources was one of the reasons the DCRI team published the article.

She congratulated industry on the “strong effort” to create and fill the databases, and challenged researchers to now make good use of the results.

Navar also called for improved financial mechanisms to support validation studies, and for journals to commit to publishing both confirmatory and conflicting results. “If confirmatory re-analyses are never published, there will be a pretty big publishing bias moving forward,” she warned.