Clinical Trials Adrift in the Age of Social Media
By Deborah Borfitz
June 13, 2024 | Many forces outside of the healthcare setting shape people’s decisions about whether to join or stay in a clinical trial or comply with study-related requirements. While no strong evidence exists that influencers on social media are compromising the integrity of trials, it has become self-evident that they have the power to undermine the authority of traditional sources of information, and their sway is therefore an issue “very deserving of our attention,” according to Wen-Ying Sylvia Chou, Ph.D., program director in the Health Communication and Informatics Research Branch of the Behavioral Research Program at the National Cancer Institute (NCI).
Rare disease communities often lean on social media to seek and provide social support and keep one another informed about available clinical trials, she says. However, many individuals in these communities are vulnerable because of poor prognosis or lack of treatment options.
This vulnerability could drive some to turn to online information that is of mixed quality or perpetuates false hope when patients are told that the medical establishment has little to offer for their illnesses, continues Chou. It might also leave them with a vested interest in seeing trials go in a certain direction or in improving access to therapies not yet approved.
Regulators such as the U.S. Food and Drug Administration (FDA) recognize that they need to understand and address how the online landscape is altering conversations about drugs and drug trials, Chou says. While the FDA is unlikely to itself become a social media influencer, it might think about strategically partnering with trusted voices—be they outside experts with a large following, community leaders, or even celebrities—which she regards as important but underutilized allies in countering misinformation.
“I think we should aspire to become more trusted sources of medical information,” says Chou, adding that exactly how to accomplish that is challenging and complex. Trust is “very nuanced” and trust toward time-honored institutions has eroded in certain communities. “It is not really that people are far more distrustful of science, but fewer people remain on the fence—those who were wary of medical establishments have become more distrustful, while others have become more trusting of science.”
Only a decade ago, social science researchers weren’t particularly concerned about online health information other than noting that patients started bringing it to their doctor visits, she says. But due to the spread of misinformation, health literacy challenges among some patients, and the changing online information landscape, patients’ trust in doctors steadily declined over the ensuing years, potentially jeopardizing the once-sacred physician-patient relationship.
In those early days, data scraping through outfits such as CrowdTangle made it easy to eavesdrop on online chatter about health, such as discussion about drugs people were taking and the side effects they were experiencing, continues Chou, adding that it was used “almost like pharmaco-surveillance work.” Social media effectively functioned as an innovative data source and was the basis of many published papers, including her own.
Nowadays, accessing large volumes of data on social media platforms such as X (formerly Twitter) has become more difficult and costly, Chou says. Data sampling challenges have also grown along with the size of the internet and social media channels.
What may be more illuminating is focusing on online communities gathered for specific purposes and studying their discourse, to understand their concerns and priorities, says Chou. Research on misinformation, her specialty, also runs up against unfounded fears that it is an attempt to control free speech.
Further adding to the predicament is increasingly powerful and ubiquitous artificial intelligence (AI) and the ability to create “deep fake” images quite easily to, for example, generate a clip of an influential figure giving a fictitious speech, as has already been seen in the political realm, she continues. Improperly trained, AI can also perpetuate biases to fabricate a glamorous, idealized version of a person struck by cancer.
Too much information may ultimately be the bigger and more concerning problem, says Chou, since it might cause people to become apathetic and “less trusting of anything,” whether it has been authenticated or shown to be patently untrue.
Sophisticated Users
Social media has probably had more positive than negative effects on clinical research on rare diseases, echoes Marshall Summar, M.D., CEO of Uncommon Cures, which specializes in this arena. “The family groups and people who are participating in research stay very attuned to who is developing what ... [because] these are folks who are very strongly looking for cures.”
Summar is well-known for his pioneering work in caring for children diagnosed with rare diseases. “Over the years, my patients often knew a trial was going on before I did,” he says. The communities that have formed around different rare diseases—there are 12,000 of them—tend to be small groups who do a lot of information sharing.
And the information is generally good because sponsor companies “have to interact with them pretty closely,” he notes, as do their doctors. “Most of the rare disease family organizations use social media heavily [e.g., Facebook groups] and that’s... the connector.” They are consequently more sophisticated consumers of information relative to patients with more common diseases.
Because rare diseases are largely genetic in nature, they are something “people have been dealing with their entire life at some level” rather than sporadically or only at an older age, Summar points out. Probably the closest comparator in the larger disease community would be groups such as the NCI or American Cancer Society, which are “still fairly removed from patients’ own network.”
Moreover, with only a few hundred or a few thousand patients sharing any one rare disease, “everyone gets to know each other,” in contrast to the anonymity of disease communities populated by millions, says Summar. Misinformation doesn’t have much of an opportunity to take root because any naysayers are recognized as such and new voices in the group are “going to be viewed with skepticism until they have established themselves.”
Compelling Argument
Chou has been working in the misinformation and disinformation space for about six years and says the issues have only become more complex, nuanced, and difficult to study since the Cambridge Analytica scandal, in which Facebook data was harvested for purposes of swaying the 2016 election. “We are working hard at what I sometimes call mitigation because the harm is not totally going to go away, but if we can lessen the impact at least that’s a start.”
In terms of clinical trials, she recalls, concerns were raised about possible unblinding of treatment assignment at a 2018 NCI clinical trial meeting. So, this is not a new worry, but to her knowledge it is not yet backed by hard evidence. Verified cases of misinformation and vested interests at play are limited to the larger world of healthcare.
But the same sort of deceptive practices could afflict information sharing about clinical trials. Some social media influencers have incentives, financial and otherwise, to prompt an emotional reaction from their followers and sometimes deal in distrust—to, for example, “cast doubt that physicians have the best interests of patients at heart,” says Chou.
Fear and anger are often used to make messages stickier, and the purveyors sometimes have ulterior motives—be it to make a profit, create mistrust, or turn people away from standard treatment. Chou says that in her experience the more successful attempts at addressing misinformation come not from correcting it but from helping people spot the typical tactics of those spreading incorrect or misleading information, such as inciting emotional discourse or mixing scientific language with personal observations.