Drug Developers Eyeing Rollout of AI Legislation in the EU

By Deborah Borfitz 

October 22, 2024 | Eyes around the globe are on the EU Artificial Intelligence (AI) Act—the first comprehensive legal framework on AI anywhere—which will likely influence the direction of regulatory oversight measures adopted elsewhere, including other major markets such as the United States, according to Lori Ellis, head of insights at BioSpace, a news organization covering the life sciences industry. Differing perspectives on the Act, including whether and to what extent it could be a “death knell” for innovation, will be aired during a panel discussion at the upcoming Summit for Clinical Ops Executives (SCOPE) Europe in Barcelona, Spain.

Ellis will serve as moderator of the conversation with questions probing the current and future state of AI in clinical research and the impact on the industry and dealmaking around the enabling technology. Panelists are Ricardo Gaminha Pacheco, strategic partnering, business development and licensing director at Insilico Medicine; Firas Abdessalem, head of GPV signal, risk and oversight for the digital service line of Sanofi; Christopher Hart, partner and co-chair of the privacy and data security group of the Foley Hoag law firm; and Artemy Shumskiy, an investor with LongeVC. 

Regulation is needed, in the opinion of Ellis, “to manage risks and verify that the AI systems are safe, fair, and offer the needed protections according to the data being used.” The alternative is “essentially a technological Wild West.”  

The quality of the regulation being brought forth is now up for discussion industrywide, says Ellis, adding that it should evolve along with advances in AI technology. Another big question in circulation is whether the European member states charged with implementing the AI Act have the bandwidth to do it effectively, with the messy debut of the EU’s In Vitro Diagnostic Regulation still top of mind.

The EU AI Act is a massive document hundreds of pages long, but the gist is that AI systems are bucketed into risk tiers (minimal, limited, high, and unacceptable) that determine the obligations of providers (developers) and deployers to foster trustworthy AI and prevent harm to fundamental rights, safety, and ethical principles.

The legislation has prompted varied, but unsurprising, responses from the biotech, legal, and investor communities, says Ellis. The critical next steps are to promote collaboration on moving forward with AI and to help all life sciences stakeholders understand the current global market and how to prepare for 2030 and beyond.

Mostly High-Risk AI

Ellis, formerly a writer for Pharma Intelligence, says the life sciences industry often gets “separated into different silos [e.g., execution in R&D, manufacturing, investment]... that do not speak to one another. AI spans across those silos.” She enjoys “bringing stakeholders from each part of the industry together to make or strengthen those connections.” But the EU AI Act has a lot of nuances that have left the door open for dialogue and disputes. 

The legislation was approved earlier this year and entered into force in August, although many of its provisions do not take effect until 2025 or later. Among the key points: AI will be classified according to its risk, with what used to be deemed minimal risk potentially becoming subject to regulation given the advances seen with generative AI and the fragile interconnectedness of digital technologies. Most of the obligations fall to the providers of high-risk AI systems, although general-purpose AI developers and AI deployers also have some responsibilities, while affected end users (e.g., patients and trial participants) have none.

“What I would argue is that in the life sciences industry, most AI is going to be considered high risk,” says Ellis, with the remainder likely falling into a lower-risk tier. The reason is that all of it, in one way or another, influences the lives of patients. In addition to consuming data, AI produces data, and if that output is misleading, it could result in harm.

“The industry itself is dealing with sensitive data [personally identifiable patient and human subject information] that would be high risk,” she continues. Layer onto that the rise in cybersecurity incidents, such as the CrowdStrike update earlier this year, when a flawed software patch caused widespread IT outages. “What if the update had influenced an instrument being used during surgery?”
These are the sorts of what-if scenarios both companies and regulators are focused on when it comes to AI.

In the realm of clinical research, quite a bit of AI is already in use—for everything from patient recruitment and data collection and analysis to the assessment of drug effectiveness and adverse event monitoring—and “at multiple times it can fail,” says Ellis. About 35 different potential data biases also exist, such as an inherent bias in inclusion-exclusion criteria that omits patients of a certain ethnicity or socioeconomic status.

Human Oversight

An algorithm comes pre-trained on data but only subsequently gets “grounded” on clinical trial data, she emphasizes. On the user end, “you must make sure you have the appropriate data for what you’re trying to do and... that the algorithm is running properly.”

In terms of clinical studies, the EU AI Act expects not only transparency about how an algorithm was trained but also a risk management system, says Ellis. “You have to be able to identify what that acceptable risk is... what is going to be monitored... and [have] human oversight” to confirm any deployed AI tools are doing what is intended.

AI technology unquestionably has value. “AI does not get tired, AI can review massive amounts of data, and AI can quickly give you results,” Ellis says, all of which should lead to cost savings. The caveat is that a human still needs to review both the data and the results.

Finding Flaws

Compliance with standards related to data privacy and data bias is nothing new to the clinical trials enterprise and should be built into any AI algorithm being used, Ellis points out. “However, there are two parallel thoughts in the life sciences industry right now... those that believe AI will replace humans and those that believe it is a tool,” although the second perspective is more prevalent.  

“AI is a tool and that’s all it is,” says Ellis, and as such provides many benefits. “It can even reveal societal flaws that appear in clinical trials, such as a shortage of enrolled minority patients.” 

The U.S. Food and Drug Administration has already put out a series of guidances specific to Software as a Medical Device, a category covering software that serves a medical purpose without being part of a hardware device and that increasingly includes AI-enabled tools, and the agency could decide to take on regulation akin to the EU AI Act, she says. Much may also come down to whether Congress works on guidelines as well. States, in the meantime, are working on AI regulations for other industries, which will be worth watching as AI adoption in the life sciences sector unfolds, she adds.

Even prior to the EU AI Act, Ellis notes, she witnessed thoughtfulness within the industry about how to responsibly use technology when dealing with medications and medtech devices consequential to the lives of patients. Companies, in any case, aren’t apt to use AI recklessly, since the market penalties alone would be severe.
