Morality And Machines: How To Use AI Ethically For Drug Development

Contributed Commentary by Jabe Wilson

November 15, 2019 | We are at a turning point in the history of life sciences research. Artificial Intelligence (AI) is being deployed by companies at a phenomenal speed, with research suggesting 70% of organizations are now using AI, machine learning, or deep learning as part of their research programs, up from 44% two years ago. However, as AI becomes mainstream, many people have raised legitimate concerns about the ethical implications of using it for drug discovery and development. AI can be, and has been, seriously misused in several fields, including criminal justice, where flawed algorithms have contributed to people being wrongfully imprisoned, so it's critical that life sciences organizations understand how to deploy AI without harming patients.

There’s no one-size-fits-all approach to guaranteeing ethical behavior from researchers using AI systems. Each organization will need to find the approach that works for them, based on the type of research they’re doing and the systems they’re using. One key step everyone should take, however, is ensuring full transparency throughout the process. This means allowing researchers to go back and review the algorithms being used, the data the calculations are based on, and the workings of the scientists who interpreted the results, so that there can be accountability at every step. Transparency not only helps protect patient safety; it also makes for better science, because researchers can more easily critique and evaluate each other’s work.
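
As a concrete illustration, consider the minimal sketch below of what such an audit trail might capture. Everything here is hypothetical (the record fields, names, and model are invented for the example, not drawn from any particular system): each AI-assisted result is logged with the exact algorithm version, a cryptographic fingerprint of the input data, and the scientist accountable for the interpretation, so all three can be reviewed later.

```python
import hashlib
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AnalysisRecord:
    """One traceable, reviewable step in an AI-assisted analysis."""
    model_name: str      # which algorithm produced the result
    model_version: str   # exact version, so the run can be reproduced
    data_sha256: str     # fingerprint of the input dataset
    interpreted_by: str  # scientist accountable for the conclusion
    conclusion: str      # the human-readable interpretation
    timestamp: str

def fingerprint(dataset_bytes: bytes) -> str:
    """Hash the raw input data so reviewers can confirm exactly
    what the calculation was based on."""
    return hashlib.sha256(dataset_bytes).hexdigest()

# Hypothetical usage: log a record alongside every reported result.
record = AnalysisRecord(
    model_name="toxicity-classifier",
    model_version="2.3.1",
    data_sha256=fingerprint(b"...raw trial data export..."),
    interpreted_by="j.smith",
    conclusion="No hepatotoxicity signal at proposed dose.",
    timestamp=datetime.now(timezone.utc).isoformat(),
)
print(json.dumps(asdict(record), indent=2))
```

However a given organization implements it, the essential point is the same: algorithm, data, and interpretation are all pinned down well enough for a second scientist, or a regulator, to retrace the path from raw data to conclusion.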

The Dangers of Bad Data

One of the reasons transparency is so important is that it’s almost inevitable that bias will, at some point, be introduced into AI systems. Men make up over 80% of AI researchers and developers, and minorities are even more under-represented than women; at a recent AI conference, out of 8,500 attendees, only six were black. Such imbalances mean that, whether consciously or unconsciously, certain biases will be reproduced by AI systems and reflected in their results. In drug discovery and development, bias is even more problematic because researchers are often working with very small sample sizes from clinical trials. Smaller samples are far less likely to be representative of all patient populations, which becomes a serious issue when AI systems extrapolate findings to the wider public.

For example, clinical trials generally enroll far fewer women and ethnic minorities than white men, and gender imbalances have created significant negative consequences for women. One particularly notable case is Ambien. In 2013, the FDA cut the recommended dose of Ambien for women in half, after evidence emerged that the drug affected men and women quite differently. In the 20 years before the FDA updated the recommended dosage, there were over 700 reports of motor vehicle crashes associated with Ambien, endangering the female patients, their passengers, and other road users. If the clinical trial data for Ambien were used to ‘train’ an AI system, that system could easily absorb this unintentional bias and reproduce the same skewed dosing assumptions for other drugs in development, which could lead to further negative consequences.
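
To make the mechanism concrete, here is a deliberately simplified sketch of that failure mode. The numbers are synthetic and do not reflect real Ambien pharmacology; the point is only that a dose model fit to a cohort that is 90% male will systematically over-predict the tolerable dose for women once it is asked to generalize to a balanced population.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n, frac_female):
    """Synthetic cohort: the true tolerable dose depends on body weight
    AND sex (women clear the drug more slowly, so their dose is lower)."""
    female = rng.random(n) < frac_female
    weight = np.where(female, rng.normal(70, 10, n), rng.normal(85, 10, n))
    dose = 0.1 * weight - 3.0 * female + rng.normal(0, 0.5, n)
    return weight, female, dose

# "Trial" data: 90% male, mirroring historically skewed enrollment.
w, f, d = simulate(500, frac_female=0.10)

# The model sees weight only -- sex never enters the training data.
model = np.poly1d(np.polyfit(w, d, 1))

# Extrapolate to a balanced population, as a deployed system would.
w2, f2, d2 = simulate(10_000, frac_female=0.50)
print(f"mean dose over-prediction, women: {np.mean(model(w2[f2]) - d2[f2]):+.2f}")
print(f"mean dose over-prediction, men:   {np.mean(model(w2[~f2]) - d2[~f2]):+.2f}")
```

In this toy setup the model overshoots women’s true tolerable dose by a couple of units while remaining close to unbiased for men, exactly the kind of error that only surfaces once the drug reaches a more diverse population than the one it was trained on.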

Show Your Workings

Given that it’s impossible to fully eliminate bias in drug discovery, organizations and researchers need to account for its presence and take precautions to minimize its impact. Although many companies are investing in AI, we are still in the early stages of learning how to use it effectively for research. Additionally, for AI-driven drug discovery and development to be viable in the market, regulators will need to design new frameworks and pass additional legislation that prohibit unethical use of such tools and reduce bias wherever possible. Such frameworks and legislation will require regulators to closely analyze companies’ algorithms, datasets, and calculations to determine whether the conclusions drawn are trustworthy.

In short, if companies want to get new drugs to market quickly, they will need first-class data management throughout, so that regulators can easily review and approve their work. Improving the transparency of AI research can be complicated, but it is essential. Most organizations have a confusing mesh of incomplete drug and patient data spread across various silos. Important information is routinely saved in inconsistent formats, often with no central repository for everyone to access. Not only does this make it extremely difficult for AI tools to interpret data correctly, it also makes it harder to untangle what went wrong when there is a problem. To function properly, AI systems need clean, harmonized data, complete with the proper taxonomies. A single rich source of R&D data not only improves AI performance, but also allows researchers to better evaluate the validity of the conclusions drawn.
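
What does "clean, harmonized data with the proper taxonomies" look like at the smallest scale? The toy sketch below is purely illustrative (the records, synonym table, and identifiers are invented; real pipelines would map to established vocabularies such as RxNorm or MedDRA): three silos record the same drug under three names and two dose units, and harmonization collapses them into one canonical, comparable form before any AI system ever sees them.

```python
# Hypothetical records from three silos: same drug, three spellings,
# two different dose units, one missing field.
raw_records = [
    {"drug": "Zolpidem", "dose": "10 mg", "source": "trial_db"},
    {"drug": "zolpidem tartrate", "dose": "0.01 g", "source": "pharmacovigilance"},
    {"drug": "AMBIEN", "dose": None, "source": "ehr_extract"},
]

# A tiny, invented controlled vocabulary: every synonym maps to one
# canonical identifier, mimicking taxonomies such as RxNorm.
SYNONYMS = {
    "zolpidem": "RX:ZOLPIDEM",
    "zolpidem tartrate": "RX:ZOLPIDEM",
    "ambien": "RX:ZOLPIDEM",
}

UNIT_TO_MG = {"mg": 1.0, "g": 1000.0}

def harmonize(record: dict) -> dict:
    """Normalize one raw record into the shared schema: canonical
    drug ID, dose in milligrams, and provenance preserved."""
    canonical = SYNONYMS.get(record["drug"].strip().lower())
    dose_mg = None
    if record["dose"]:
        value, unit = record["dose"].split()
        dose_mg = float(value) * UNIT_TO_MG[unit]
    return {"drug_id": canonical, "dose_mg": dose_mg,
            "source": record["source"]}

for r in raw_records:
    print(harmonize(r))
# All three rows now refer to RX:ZOLPIDEM with comparable doses, so a
# downstream model cannot mistake them for three different drugs.
```

Real harmonization efforts are vastly larger, but the principle scales: agree on the taxonomy first, normalize everything against it, and keep the source of each record so errors can be traced back.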

Black Box Or Transparent Process?

Today, AI is creating a sea change in how drug discovery is done. Pharma companies cannot afford to ignore its potential, and those that do will fall behind their competitors. Yet we cannot fall into the trap of seeing AI as a silver bullet. Unless researchers are careful about how they use their AI tools, those tools could end up doing more harm than good. When it comes to something as important as drug discovery, AI can’t simply be a black box that spits out answers we’re unable to verify or interrogate. We need greater transparency around how AI tools operate and how they reach their conclusions. Not every firm can easily challenge the algorithms these systems are built on, especially if its researchers lack a background in data science. But every firm can, and should, do more to improve the quality and cleanliness of its data, so that we don’t end up with more cases like Ambien, where patients suffer because of undetected bias.

Jabe Wilson is the Consulting Director for Text and Data Analytics at Elsevier (RELX). He has a background in artificial intelligence and has worked with it for almost 30 years, since studying at the School of Cognitive and Computing Sciences at Sussex University. He has broad experience in industries delivering digital content, has participated in several start-ups, and has taught interaction design at the Royal College of Art (RCA). He can be reached at jabe.wilson@elsevier.com.