Right-Sizing Site Selection: Tapping The Evidence

Third installment in a four-part special report

By Deborah Borfitz

June 26, 2019 | Pfizer, Allergan, and Merck each take a decidedly different approach to selecting sites and investigators for clinical trials. But across the board, data plays a starring role.

Through the development of multiple analyses, Pfizer has learned that aggregating data at the site level yields greater statistical significance because of the larger number of data points available relative to individual investigators, says Oriol Serra, the company's head of site intelligence and selection. Site-level aggregation also allows for better trend analysis of performance over time. "Once we have identified the facilities targeted in the countries of interest, we review all the PIs [principal investigators] available at those facilities and perform another layer of assessments to recommend the most adequate PI for the study."
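
For illustration only (the article does not describe Pfizer's actual data model), a minimal Python sketch of rolling hypothetical investigator-level records up to the site level might look like this:

```python
import pandas as pd

# Hypothetical investigator-level performance records (illustrative only).
records = pd.DataFrame({
    "site_id":             ["S01", "S01", "S01", "S02", "S02", "S03"],
    "investigator_id":     ["I01", "I02", "I03", "I04", "I05", "I06"],
    "patients_enrolled":   [12, 8, 15, 4, 6, 20],
    "protocol_deviations": [1, 0, 2, 1, 0, 1],
})

# Grouping by site pools more data points per unit of analysis than any
# single investigator provides, which stabilizes the estimates and
# supports trend analysis over time.
site_level = records.groupby("site_id").agg(
    investigators=("investigator_id", "nunique"),
    total_enrolled=("patients_enrolled", "sum"),
    deviation_rate=("protocol_deviations", "mean"),
)
print(site_level)
```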

Tracking trends in facilities over time turns up more differentiating characteristics, including some by ownership type, Serra says. It also provides a "much richer view" of the performance of investigators operating out of those sites on any given study.

The latest methodology, fully implemented last fall, characterizes performance using a site quality risk dashboard based on nearly 30 metrics and algorithms. The dashboard provides a picture of the current state and predicted future state, plus an "anomaly detection" section identifying sites that are outliers within a study, explains Jonathan Rowe, head of quality performance & risk management, global product development, at Pfizer. "The purpose is to ensure we're protecting patients' rights, safety and welfare… and gathering high-integrity data so we can have successful submissions."
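
Pfizer has not published the dashboard's internals, but the anomaly-detection idea can be sketched in simple terms: flag sites whose metric sits far from the study-wide distribution. The metric, values, and threshold below are hypothetical.

```python
import pandas as pd

# Hypothetical per-site metric within one study (e.g., data query rate per
# subject); the real dashboard draws on nearly 30 metrics, per the article.
sites = pd.DataFrame({
    "site_id": [f"S{i:02d}" for i in range(1, 9)],
    "query_rate": [0.8, 1.1, 0.9, 1.0, 3.6, 0.7, 1.2, 0.95],
})

# Flag sites whose metric deviates strongly from the rest of the study.
z = (sites["query_rate"] - sites["query_rate"].mean()) / sites["query_rate"].std()
sites["outlier"] = z.abs() > 2

print(sites[sites["outlier"]])
```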

Pfizer was an early industry pioneer in the use of site performance evaluation tools and previously used a "site health" model of advanced algorithms that focused on four areas, says Rowe. The recently launched approach crosses 10 areas and includes visualizations to pinpoint sites at risk of Good Clinical Practice underperformance.

The current-state section of the dashboard is a rollup of a series of metrics taking a holistic view of performance—including not only how well sites are following good clinical research practices (e.g., protocol compliance and complete trial master file), but also how well monitors are doing their job, Rowe says. Some of these metrics are industry standards and have been supported by organizations like Avoca and the Metrics Champion Consortium, he adds. More distinctive are the increasingly precise algorithms that can only be built from Pfizer's large quantity of historical performance data.

A predictive model separately looks at the probability of inspection findings based on prior audits and site characteristics. It's accurate seven out of 10 times, a "spectacular" track record given the difficulty of forecasting human-driven activities, says Rowe, and is expected to continue improving.
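
The article does not say how the model is built; as a hedged sketch, a generic classifier trained on prior audit findings and basic site characteristics illustrates the general shape of such a prediction. All features, labels, and figures here are synthetic, not Pfizer's actual inputs.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic site features: prior audit findings and basic characteristics.
n = 500
prior_findings = rng.poisson(1.5, n)
years_active = rng.integers(1, 20, n)
trials_run = rng.integers(1, 40, n)
X = np.column_stack([prior_findings, years_active, trials_run])

# Synthetic label: sites with more prior findings are likelier to have a
# future inspection finding.
p = 1 / (1 + np.exp(-(0.8 * prior_findings - 1.5)))
y = rng.random(n) < p

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("holdout accuracy:", accuracy_score(y_test, model.predict(X_test)))
```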

The intent with all this is to "hardwire" a reliable, holistic approach to evaluating sites, says Serra, one that has grown more robust since Pfizer brought the function back in-house. In several therapeutic areas, the signal between quality and performance indices was relatively weak when site feasibility was being outsourced. Doing the work internally has also allowed Pfizer to begin the process earlier, "in many instances a year ahead of the first patient study visit." Already, cycle times and investigator interactions have measurably improved, he adds.

Quality data is accessible on a global scale, Serra says. Other types of information are becoming "less challenging" to access even in areas like Africa, Asia, and Latin America. Data-sharing efforts such as TransCelerate's Shared Investigator Platform (SIP) have been a "change of trends" that he expects will accelerate over the coming year.

Connecting the dots between multiple datasets—e.g., claims data, competitive intelligence, facility metrics from the Shared Investigator Platform and Pfizer's own internal performance data—is the essence of an "intelligent site selection" process that uncovers never-before-established performance indicators, says Serra.
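
A minimal sketch of that joining step, assuming each source can be keyed on a common site identifier (the sources, columns, and values below are placeholders, not the actual feeds):

```python
import pandas as pd

# Hypothetical extracts from three sources, keyed on a shared site identifier.
claims = pd.DataFrame({"site_id": ["S01", "S02", "S03"],
                       "eligible_patients": [420, 130, 260]})
facility = pd.DataFrame({"site_id": ["S01", "S02", "S03"],
                         "active_trials": [6, 2, 4]})
internal = pd.DataFrame({"site_id": ["S01", "S02", "S03"],
                         "median_enroll_rate": [1.8, 0.6, 1.1]})

# Connecting the datasets yields one row per site that can then be mined
# for performance indicators no single source reveals on its own.
combined = claims.merge(facility, on="site_id").merge(internal, on="site_id")
print(combined)
```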

Using data visualization software to overlay different types of data across time dimensions is also expected to help identify attributes of "super sites" of the future and referral opportunities to increase enrollment of underrepresented patient groups, says Mohanish Anand, head of study optimization, global product development. "We want to keep the investigator group fresh, and aligned with the needs of the studies we’re building," which will require adapting to the needs of Millennials—notably, by ensuring trials are hyper-focused on patient-centricity.

Pfizer starts by looking across geographies at known high, average, and low performers based on compliance with study requirements, says Anand, but also by gathering intelligence on investigators who have completed one or two studies with other companies and may be good targets for relationship-building. "We capture as much information as we can… and position them [via training support] for much better success by the time we do site selection."

Once those relationships are established, Pfizer selectively shares its portfolio outlook for the coming year so sites can prepare to participate, says Anand. Those not selected for a study can talk to a company representative to understand why. "So, when we do feasibility, it isn't the very first interaction a site has had with Pfizer."

Doing More With Less

Like Pfizer, Allergan takes a centralized approach to site selection, starting by identifying a target site profile and then mining a multitude of databases, including TA-Scan and the TransCelerate Investigator Registry. The resulting master list of viable-looking sites is periodically supplemented by an outside site identification vendor, says Lorena Gomez, the company's director of global study startup and essential documents.

Some companies start with a core set of strategic, go-to countries supplemented with countries recommended for commercial or patient availability reasons. "Allergan is a lean organization, so we ideally try to identify enough sites within a country to justify putting a resource in that country," Gomez says. It's difficult to justify the human resource and submission costs for countries with only one to two sites, except for trials where subjects are extremely difficult to find.

"I think having all of these new resources and databases available helps us make more informed decisions about the sites we select," including number of previous studies they've done and the percentage of time they met enrollment goals, she continues. "In the past, we had to fly blind and trust what sites put on feasibility questionnaires was accurate."

Study teams can also tap site monitors, who are "front and center at sites," for site recommendations, says Gomez. For one study that was only actively pursuing gastroenterologists, a site monitor helpfully pointed to a multidisciplinary practice where the principal investigator was an internist but received referrals from multiple gastroenterologists. Site monitors are also in the know about how studies get operationalized at sites, which can help to inform protocols and strategies that can "make or break success of a study."

Boots on the Ground

IQVIA has people locally situated the world over who have a good understanding of sites in their country and can flag changes that wouldn't necessarily "percolate up into our data quickly," such as an investigator who has retired or relocated or an upcoming election that will be highly contentious, says Allen Kindman, IQVIA's vice president of clinical planning and analytics. "Large sponsors generally have 'must-have' countries driven by specific business needs," he adds, as might smaller companies that are aligned with a key opinion leader or national coordinator. But a "significant number" of study sponsors rely on IQVIA to choose countries and investigators when additional sites are needed.

Investigators with past performance issues aren't necessarily disqualified from future studies, Kindman notes. IQVIA can often "manage around… quality issues that are not insurmountable"—perhaps by scheduling more frequent clinical research associate (CRA) visits or providing additional training—in cases where investigators otherwise enroll well, are good performers and "sponsors agree to include [them] in a study."

Seeking Site Feedback

When selecting trial sites, Merck uses a combination of analytic data sources and first-hand country knowledge, and the starting point depends on the therapeutic area, according to Laura Galuchie, director of global clinical trial operations. The anticipated recruitment rate is based on how sites have historically performed on similar types of trials, but the "final decision" on site selection is based on input from in-country staff and conversations with site investigators and study coordinators. Typically, Merck will use available evidence to select countries for an entire study program before deciding which ones best suit an individual protocol, she adds.

The approach allows Merck to both identify sites "able and willing" to conduct a study and potentially adjust an aspect of the protocol they find troubling—for example, a visit window that is out of step with the typical visit schedule in a country, says Galuchie. But some changes can be difficult to make at the site selection stage, including those accommodating when medicines customarily get delivered in a region or which concomitant medications appear on the local formulary. "We try to avoid that situation as much as possible… some of our trial teams get feedback from sites up front."

The dialogue with sites offers a needed "level of contemporaneousness" that even the best databases can't provide, says Galuchie, covering recent changes to facilities and staff as well as current workload. "Sites may be very busy with local academic studies in addition to sponsored trials [listed on clinicaltrials.gov]."

Merck is currently being onboarded onto the SIP and is in the process of moving trials onto the platform, says Galuchie, who is the company's representative on the TransCelerate oversight committee as well as the internal program lead coordinating activities across TransCelerate initiatives. Anecdotal feedback from colleagues actively using the SIP is that it will be useful in introducing Merck to investigators who have extensive research experience with other companies and an interest in taking on new studies. "That's different than approaching someone who has no research experience at all… [and] a little bit easier," she notes.

SIP will also allow Merck to rebuild its site feasibility questionnaire to be more protocol- and compound-specific, Galuchie adds. That's because it will be able to get answers to questions it is obligated to ask, per Good Clinical Practice guidelines, directly from the platform.

Predicting Performance

Predictive analytics powered by artificial intelligence (AI) is "absolutely a step in the right direction" for the site selection process, says Jim Kremidas, executive director of the Association of Clinical Research Professionals. Notably, it can point study sponsors toward investigators and institutions seeing the specific kinds of patients needed for trials, as well as their standard of care.

IQVIA already relies heavily on machine learning to profile sites and investigators, says Kindman, adding that "one of [the] intentions" of the IMS-Quintiles merger was to develop such technologies. "It's now business as usual at this point—better, faster and more efficient and accurate—but obviously it has taken us time to demonstrate that prospectively and those metrics are starting to come to life."

Machine learning is currently used to assess sites and investigators in three ways, says Kindman. First, it looks at how they have historically enrolled for trials run by IQVIA. Second, it judges the quality of their work based on such variables as the number of previous quality issues and their dropout and screen fail rates. Finally, it determines whether they've had experience in the specific indication of interest (e.g., ductal carcinoma in situ) or more broadly in the therapeutic area (e.g., breast cancer).
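
As a rough illustration only (IQVIA's actual models are not described), those three dimensions can be imagined as inputs to a simple composite score; every column, weight, and value below is hypothetical, and a real system would learn the weights rather than hand-pick them.

```python
import pandas as pd

# Hypothetical site profiles covering the three dimensions mentioned:
# past enrollment, work quality, and indication/therapeutic-area experience.
sites = pd.DataFrame({
    "site_id": ["S01", "S02", "S03"],
    "enroll_rate": [2.1, 0.9, 1.5],        # historical patients/month on past trials
    "quality_issues": [0, 3, 1],           # prior quality findings
    "screen_fail_rate": [0.20, 0.45, 0.30],
    "indication_trials": [4, 0, 1],        # e.g., ductal carcinoma in situ
    "ta_trials": [12, 2, 6],               # broader area, e.g., breast cancer
})

# A naive weighted composite: reward enrollment and experience, penalize
# quality flags and screen failures.
sites["score"] = (
    sites["enroll_rate"]
    - 0.5 * sites["quality_issues"]
    - sites["screen_fail_rate"]
    + 0.3 * sites["indication_trials"]
    + 0.1 * sites["ta_trials"]
)
print(sites.sort_values("score", ascending=False)[["site_id", "score"]])
```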