Navigating A Connected World: Sensors In Clinical Trials And Care

By Allison Proffitt 

April 11, 2017 | It’s a little bit of a brave new world, says Jeremy Sohn, VP and Global Head of Digital Business Development and Licensing at Novartis.

In this case, he’s specifically referring to vast questions surrounding connected sensor data, and how it might be used to improve drug development, clinical trials, and patient care. He lists a stream of questions that Novartis has been busy exploring: What is the range of connected sensors? Are they wearable or otherwise? What do the data look like? What do these devices look like? How would we incorporate them into studies? How do we build them into our existing processes? How do we move the data into our data management workflows? How do we analyze the data?

Considering these questions is no easy task.

“The data that flow from a lot of these sensors are very different from the data we’ve collected in the past,” Sohn explains. “Our endpoint datasets tend to be very discrete; they’re at specific time intervals within a study. They’re usually not continuous. We usually don’t get multiple reads. We usually don’t get large streams of data.”

Sensor data—whether from wearable devices or other connected sensors—changes everything about how studies are designed, how data are collected, and how progress is analyzed.

Brave new world, indeed.

“The challenge for pharma—and for healthcare more broadly—is making sure we understand the data we collect from those wearable sensors and understanding how it can be applied,” Sohn says. The stakes are high. Data collected in trials support regulatory submissions. “We really need to understand… what that data means in context, either comparing it to an existing set of endpoints, or building an understanding of how it can and should be a new clinical endpoint.”

Novartis has reached an inflection point, Sohn says. Sensors need to be incorporated into clinical trial programs. But when?

The life cycle of a clinical trial program can be seven to 12 years from a Phase I to the end of a Phase III program, Sohn says. Choosing to incorporate technology means trying to align the trial’s lifecycle with that of the technology.

“Given Moore’s Law and general commercial life cycles, those technologies may change as early as a year from now. We need to contemplate not necessarily the technology, but the endpoint itself, the data that’s being generated. We need to create a framework for being able to switch out the technology, but still focus on the endpoint over a long period of time.”

It’s a problem that Jason Laramie’s group at the Novartis Institutes for BioMedical Research (NIBR) is spending time on: “How do you analyze the data? What learnings can you get from the data? And how do you make the algorithms and things that we’re using potentially for the later stage trials agnostic to the device itself, so that the learnings are over the course of the entire 10-15 years of running the program, but the devices may come and go.”

Laramie, Executive Director, Head of Quantitative Sciences and Innovation, is deploying sensors in NIBR’s earliest proof-of-concept trials, using the data gathered to inform internal go/no-go decisions about a program or compound. “We end up being effectively the sandbox, if you will, for Novartis,” he says.

Algorithm Challenges

On the agenda in the sandbox: algorithm wrangling.

Heart rate data from an Apple Watch or a Fitbit or a clinical heart monitor may look the same to the user, but it didn’t start out that way.

“The raw data coming off the device may not be the same; it’s the processed part of that—how many beats per minute—that should be compatible from one device to the next,” Laramie explains. Complexity enters when an algorithm tries to interpret things like movement.

“Measuring a step is not necessarily new, but how it’s measured is relatively new,” Sohn says. “There the algorithms are dramatically different from device to device. It’s important to understand how those calculations are actually being made. If we do shift from [Sensor] Brand A to [Sensor] Brand B, it’s important to understand that when you’re looking at the raw data, there’s a lot of interpretation—and in some cases even cleaning up—that’s done by the algorithm itself.”

And the differences extend to patient populations as well.

“We have examples… of the algorithms being built off of healthy volunteers, step algorithms, in this case for Fitbit and other devices like that. The measurement of how many steps someone takes is typically built off the gait of somebody who is healthy and walks around,” Laramie explains. But those individuals are often not representative of trial participants. “We’ve had to go back and rewrite the algorithms to account for frail, elderly individuals to get the step counts accurate.”
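Laramie’s point can be illustrated with a toy step detector. The sketch below is hypothetical and deliberately simple — not any vendor’s actual algorithm, which would involve filtering, adaptive thresholds, and gait models. It counts steps as accelerometer peaks above a fixed threshold, and shows how a threshold tuned on the strong peaks of a healthy gait can miss the much weaker peaks of a frail, shuffling gait entirely until it is re-tuned.

```python
import numpy as np

def count_steps(accel_mag, threshold, min_gap=20):
    """Count steps as local peaks in accelerometer magnitude above a
    threshold, at least min_gap samples apart. A toy detector only."""
    steps, last = 0, -min_gap
    for i in range(1, len(accel_mag) - 1):
        if (accel_mag[i] > threshold
                and accel_mag[i] >= accel_mag[i - 1]
                and accel_mag[i] >= accel_mag[i + 1]
                and i - last >= min_gap):
            steps += 1
            last = i
    return steps

# Simulate 30 steps of walking: a healthy gait produces strong impact
# peaks, while a frail, shuffling gait produces far weaker ones.
t = np.linspace(0, 30, 3000)
healthy = 1.0 + 1.5 * np.clip(np.sin(2 * np.pi * t), 0, None)
frail = 1.0 + 0.4 * np.clip(np.sin(2 * np.pi * t), 0, None)

print(count_steps(healthy, 1.8))  # threshold tuned on healthy volunteers
print(count_steps(frail, 1.8))    # same threshold misses the frail gait
print(count_steps(frail, 1.2))    # re-tuned threshold recovers the steps
```

The same raw signal yields very different step counts depending on how the algorithm was tuned — which is why, as Laramie describes, the algorithms had to be rewritten for the trial population rather than reused as shipped.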

For the most part, companies understand that their products are being used in the clinical research space, and they’ve been very open with Novartis about what’s in the black box, Sohn says. “It is a little bit of a secret sauce in many cases to these companies, and we don’t take that lightly.”

Consumer vs Clinical

This raises the question of whether commercial devices are really right for clinical applications at all.

“I think it’s a complex question; there’s not really a right or wrong answer,” Sohn laughs. “Ideally, we would like to rely on consumer-based products.”

But Laramie emphasizes that the clinical team’s needs come first. “[We] let the clinical teams talk about what their clinical need is, and then try to match the technology to it,” he says. “We never want to get out in front in that.”

Sohn says it benefits clinical research and participants to be able to use consumer-oriented devices, particularly as sensors move into the clinical care paradigm.

“It’s one thing to measure an endpoint and prove that a drug is working... for the approval process. But ideally, we can carry that forward into clinical practice and give people ways to measure the impact that their drugs are having,” Sohn says. “Expecting that an individual will want to purchase what is typically a more expensive, less accessible, and maybe less integrated system like a medical device is not ideal.”

He continues: “One of the key considerations we have in our studies—whether clinical research or clinical practice—is trying to find technology that people want to use or that people find easy to use.”

Consumer products are designed and priced to appeal to consumers. “The challenge that we have,” Sohn says, “is making sure that we maintain the rigor that is required from a clinical endpoint and clinical research perspective, and make sure the data we get are fully validated.”

Easy Peasy 

Sensors present technology challenges at every turn, but not necessarily the ones you’d expect. For instance, data volume isn’t yet much of a hassle.

“We’ve measured close to 70,000 hours’ worth of patient activity in a number of different trials,” Laramie says. “It turns out that the data are extremely compressible, and so we actually don’t have that big of a data problem. Whole genome sequencing, or some of our sequencing efforts, dwarfs this kind of data big time!... We have not actually needed to throw anything out, because these compression algorithms are really good.” There is a caveat: “That may change as we move to things that are non-accelerometer-based,” he says, “but we haven’t gotten there yet.”

Data size is challenging not in the ability to store or analyze it, Sohn says, but in the memory limitations of the devices and the network speeds available to patients uploading their own data. For example, many devices are now programmed to make tradeoffs between battery life and memory, or in the number of variables they can track at any given time. Patients may also run into bandwidth problems uploading their data at home. Sohn expects both issues to be resolved within a few years as battery technology improves and network connectivity strengthens.

Embarrassment of Riches 

The more pressing question, Sohn says, is how to choose which data to use. With a continuous flow of data, when do you start collecting? When do you stop? Is it better to average over a period of time, or to collect continuously and look only at certain intervals?

Defining meaningful intervals will require data scientists and clinical researchers working closely together, Sohn says.

“It’s called feature engineering in the data scientist’s world,” Laramie interjects. “We can define the clinical variables a whole bunch of different ways, and then figure out which ones make sense. We’re still playing around in the data in that sense.”

Within NIBR, Laramie has the luxury of gathering data first, and then exploring to find the best answers to the right questions. Sometimes researchers are trying to match a traditional primary endpoint with equivalent findings from sensor data. Other times, the goal is to improve upon a primary endpoint.

For example, Laramie is considering ways to improve on the standard six-minute walk test. “We are exploring, on the back end, this feature engineering space,” he explains. What do we learn from a six-minute walk test? “Is it the distance somebody walks? Is it how fast they walk? Is it how many steps they take in a certain interval of time?” he asks.

“We’ve been pairing the standard clinical endpoint with a device of some sort to see if we can find a correlation, and that just requires trying out different algorithms for picking that up.”
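That loop — define many candidate features from the same recording, then see which ones track the clinical endpoint — can be sketched in a few lines. Everything below is synthetic, and the feature names are invented for illustration; it shows the shape of the exercise, not NIBR’s actual analysis.

```python
import numpy as np

def walk_features(step_times, stride_m=0.7, duration=360.0):
    """Several candidate summaries of one six-minute walk recording."""
    steps = len(step_times)
    per_minute = np.histogram(step_times, bins=6, range=(0, duration))[0]
    return {
        "distance_m": steps * stride_m,
        "mean_speed_mps": steps * stride_m / duration,
        "cadence_spm": steps / (duration / 60),
        "fatigue": float(per_minute[0] - per_minute[-1]),  # slow-down over the test
    }

# Synthetic cohort: walking capacity varies by patient; the clinical
# endpoint (say, a functional score) tracks capacity with noise.
rng = np.random.default_rng(1)
rows, endpoint = [], []
for _ in range(50):
    capacity = rng.uniform(0.5, 1.5)           # steps per second
    gaps = rng.exponential(1 / capacity, 700)  # inter-step times
    times = np.cumsum(gaps)
    rows.append(walk_features(times[times < 360.0]))
    endpoint.append(100 * capacity + rng.normal(0, 5))

# Which candidate features actually track the endpoint?
for name in rows[0]:
    x = np.array([r[name] for r in rows])
    print(f"{name}: r = {np.corrcoef(x, endpoint)[0, 1]:+.2f}")
```

In this toy cohort the distance, speed, and cadence features correlate strongly with the endpoint while the noisy “fatigue” contrast does not — exactly the winnowing of candidate definitions that Laramie describes.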

The end result—always—should be better care, Sohn says. Sensors are only one means to that end.

“Health and ‘feeling better’ has a lot of different measurements. As we use technology to try to become smarter about that, it doesn’t mean we’re going to capture everything. In some situations we’ll want to use technology to replace the patient reported outcomes, but in other situations patient reported outcomes will be very important,” Sohn says.

“It’s not perfect, but hopefully it’s better. And that’s what we’re looking for: How do we get better at what we’re doing?”