Last week, leaders from around the world gathered at the Artificial Intelligence in Bioscience Symposium in London to examine the growing role AI plays in healthcare, highlighting neuroscience, and Alzheimer’s disease in particular, as one of the most promising applications of the technology.
Within the last few years, the field of bioscience has undergone an exponential expansion, especially with the development of the omics — including genomics, epigenomics, metagenomics and metabolomics. Now, artificial intelligence could take our understanding of biology a step further, integrating all this accumulated knowledge to generate valuable predictions for therapeutic applications.
“We’ve learned that you cannot make a definite statement about a particular gene,” explains Winston Hide, Professor of Computational Biology at the University of Sheffield. “An important example is the recent failure of a BACE1 inhibitor for the treatment of Alzheimer’s disease.”
Hide is referring to Merck’s verubecestat, which was dropped in February after a failed late-stage Alzheimer’s trial. The news came shortly after the high-profile Phase III failure of Eli Lilly’s solanezumab, highlighting our limited understanding of a disease area in which the success rate for new treatments is already extremely low.
According to Hide, data reproducibility and unraveling the differential contributions of multiple genes to relevant biological pathways are among the big issues in this field. This is where AI comes in, thanks to its capacity to integrate whole-genome data at scale to identify the most relevant pathways, increasing the chances of selecting the best target for a new therapy. “From these studies, we’ve found that the top targets for Alzheimer’s were non-coding regions like THAP9-AS1, which potentially could be used as a target,” says Hide.
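The pathway-ranking idea Hide describes is often approached with over-representation analysis, which asks how surprising the overlap between a pathway and a list of disease-associated genes is. The sketch below is a minimal illustration, not Hide's method: all gene and pathway names are invented, and real analyses operate on thousands of genes with multiple-testing correction.

```python
from math import comb

# Over-representation analysis (a common pathway-ranking technique):
# score each gene set by the hypergeometric probability of its overlap
# with a "hit list" of disease-associated genes. All names are invented.
def hypergeom_pvalue(N, K, n, k):
    """P(overlap >= k) when drawing n genes from N, of which K lie in the pathway."""
    return sum(
        comb(K, i) * comb(N - K, n - i) for i in range(k, min(K, n) + 1)
    ) / comb(N, n)

genome = {f"g{i}" for i in range(100)}   # background set of 100 genes
hits = {f"g{i}" for i in range(10)}      # 10 disease-associated genes

pathways = {
    "pathway_A": {f"g{i}" for i in range(5)} | {"g50", "g51"},  # 5 of 7 are hits
    "pathway_B": {f"g{i}" for i in range(40, 60)},              # 0 of 20 are hits
}

# Rank pathways: smaller p-value = more surprising overlap with the hits.
ranked = sorted(
    pathways,
    key=lambda p: hypergeom_pvalue(
        len(genome), len(pathways[p]), len(hits), len(hits & pathways[p])
    ),
)
print("top pathway:", ranked[0])
```

In real pipelines the same hypergeometric test runs over curated pathway databases and genome-wide hit lists; the principle, ranking pathways rather than single genes, is what Hide's point about non-coding targets rests on.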
AI could be a breakthrough not just in Alzheimer’s, but generally in anything related to the study of the most complex organ we have: the brain. “We have about 18 billion neurons in the cerebral cortex, but there is not enough computational power to visualize it yet,” says Caswell Barry, Principal Research Associate at UCL Cell & Developmental Biology.
According to Barry, the solution might come from imitating the brain’s own ability to process information through deep learning. This is clearly seen in the rapid development of cutting-edge pattern recognition algorithms, which are now starting to be applied to healthcare tasks such as cancer diagnosis. “AI has pretty much surpassed the human ability to label images.”
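At its core, the pattern recognition Barry refers to is supervised learning: a model adjusts its weights until it can label images correctly. The sketch below is a deliberately tiny stand-in, a logistic classifier on invented 3×3 pixel "images", orders of magnitude simpler than the deep networks used for cancer diagnosis, but trained by the same gradient-descent principle.

```python
import numpy as np

# Toy supervised pattern recognition: classify 3x3 "images" as
# left-bar (class 0) or right-bar (class 1). The data and model are
# illustrative inventions, not a clinical-grade deep network.
def make_data():
    X, y = [], []
    for i in range(3):
        left = np.zeros((3, 3)); left[:, 0] = 1.0; left[i, 1] = 1.0
        right = np.zeros((3, 3)); right[:, 2] = 1.0; right[i, 1] = 1.0
        X += [left.ravel(), right.ravel()]
        y += [0, 1]
    return np.array(X), np.array(y)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.5, steps=2000):
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)           # predicted probability of class 1
        w -= lr * (X.T @ (p - y)) / len(y)  # gradient of cross-entropy loss
        b -= lr * (p - y).mean()
    return w, b

X, y = make_data()
w, b = train(X, y)
preds = (sigmoid(X @ w + b) > 0.5).astype(int)
print("training accuracy:", (preds == y).mean())
```

Modern diagnostic systems replace the single weight vector with millions of convolutional parameters, but the loop, predict, measure error, nudge weights, is the same.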
However, can we really make good models of the brain using deep learning? “The answer is yes, for the visual system at least,” says Barry. But modeling the whole thing is a different question. “Some research focuses on simulating the brain of C. elegans, which is quite simple since it contains just 302 neurons, but in bigger animals there’s always inter-animal variability. The same model will not work for the brains of all people.”
Beyond the technical issues still to be resolved, applying artificial intelligence to the study of the brain poses serious ethical questions. Navin Ramachandran, Consultant Radiologist at UCLH and co-founder of Peach Lab, a company devoted to the study of public health policies, raises multiple concerns about implementing AI in routine clinical practice.
“How do we know if the training data is good enough? AI can analyze the data and give you a recommendation, but we don’t know how it really happens. Can we control the development of AI? Should we control it? When it concerns life and death decisions, this can be controversial.”
Some of these issues can be anticipated from the problems AI is already encountering in social media. “Facebook shows you what it thinks you want to see based on your profile. It only shows you posts that reinforce what you think rather than challenge you,” explains Ramachandran. Translated to medical decisions, a similar bias could have extremely severe consequences.
“It is absolutely obvious that the potential of AI is huge, massive,” says Margaret A. Boden, Professor of Cognitive Science at the University of Sussex. “In the future, it could potentially go beyond clinics and help us to understand more fundamental questions such as how memories are made. However, its contribution is still small due to technical limitations.”
“Currently, only 0.5% of the information available is actually used,” explains Jackie Hunter, CEO of BenevolentBio, a company that works on the application of artificial intelligence in the drug discovery process. “The accessibility of the data and its quality are still a major concern.”
However, she seems sure that the field will keep making significant advances despite pushback from skeptics. “Some time ago, someone in an audience of pathologists told me that I was talking rot,” says Hunter. “There will always be dinosaurs somewhere, but they will go extinct someday.”