
Artificial Intelligence in Bioscience

Hosted by

BenevolentAI

Date

Sep 16, 2016

Time

08:00 AM

Venue

The Royal Society, 6-9 Carlton House Terrace, London SW1Y 5AG


HOW ARE AI AND MACHINE LEARNING TRANSFORMING BIOSCIENCE?

The future progress of biomedical research, and bioscience in general, depends not only on the availability of high-quality data but also on the ability of researchers to mine those data to create new knowledge.

The power of machine learning and artificial intelligence to augment human insight has already been demonstrated in key areas such as pathology and healthcare delivery.

However, to truly realise the benefits of these technologies, new ways of working, collaborating, and co-creating between engineers, mathematicians, biologists, chemists and clinicians will need to be fostered and supported.

As part of this symposium, we want to support that process by working with our Knowledge Quarter partners to bring together experts in the field to discuss the opportunities and challenges of integrating intelligent technologies into biomedical research.

With this in mind, the symposium will bring together people from industry and academia with expertise in the integration of artificial intelligence and machine learning in bioscience.

Programme:

Registration and breakfast

Introduction

Session 1: The Potential of AI

Session 2: Ethics – Panel Discussion

Session 3: Therapeutics Applications

Conclusion

Networking Drinks

Speaker details:

Jérôme Pesenti, CEO of BenevolentTech

Between IBM Watson's victory against Ken Jennings at Jeopardy and the subsequent creation of the IBM Watson business unit, Google and Facebook's introduction of deep learning algorithms for speech and vision recognition in their consumer devices and applications, and the current hiring and buying binge around everything and everybody related to machine learning, there is an emerging feeling in the software industry that artificial intelligence is finally on the brink of its big breakthrough: AI is about to change our everyday lives. I will describe the new approaches and advances in AI, show that there is substance behind the buzz through real-world applications, and give a glimpse at what the future may hold.

Barney Pell, Ph.D., Owner, Decision Theory

In theory, AI algorithms can be applied to a wide set of problems of importance to humanity.  In practice, however, technology itself is only a small portion of the overall success of an application.  This talk discusses crucial issues that must be addressed in most successful applications of Artificial Intelligence.  Key issues include quality, robustness, usability, optimal mix of humans and automation, trust, evolution, and economics.

Jackie Hunter, CEO of BenevolentBio

The drug discovery and development process has undergone few changes in the past decades despite a seismic shift in the amount of data available to inform the process. The rising costs of drug discovery and the lack of impact of new knowledge on the success rates at various stages of development mean that the current model is not sustainable in the longer term. Artificial intelligence and machine learning have the potential to reduce cycle times and improve success rates in the clinic. Some examples of how this can be done will be provided.

David Jones, The Francis Crick Institute & UCL

Predicting gene function and relationships to disease is key to future developments in translational medicine. Even with all of the sequence and post-genomic data collected to date, computational methods for linking function to sequence and sequence variation are urgently needed, as there are still many genes of unknown function and a far greater number of unknown gene-disease relationships. Over the past five years we have collected a very large amount of both experimental and predicted functional data (calculated using the UCL Legion Supercomputer) for every human gene, e.g. sequence similarity, gene co-expression, predicted gene fusions and so on. So far we have used these data to predict the biological functions of functionally uncharacterised genes with considerable success. To finish, I will briefly discuss an interesting new avenue we are currently exploring: whether we can learn disease-gene associations from Mendelian (inherited) genetic disorders, where the causal relationships between genes and disease mechanisms are commonly known, and apply these patterns to non-Mendelian diseases, where the relationships are generally not known.
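As a rough illustration of the kind of supervised setup this describes, the sketch below trains a classifier on genes with annotated functions and applies it to uncharacterised ones. The feature columns, toy labels and random-forest model are invented stand-ins for illustration, not the pipeline actually used by the speaker's group.

```python
# Hypothetical sketch: predicting function labels for uncharacterised genes
# from per-gene feature vectors. All data and the model choice are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: each row is a gene, each column a piece of evidence
# (e.g. sequence similarity, co-expression, predicted gene fusions).
n_known, n_unknown, n_features = 500, 50, 20
X_known = rng.normal(size=(n_known, n_features))
y_known = rng.integers(0, 3, size=n_known)        # toy function labels (3 classes)
X_unknown = rng.normal(size=(n_unknown, n_features))

# Train on genes with annotated functions, then score uncharacterised genes.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_known, y_known)
print(model.predict(X_unknown)[:10])
```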

Brent Mittelstadt, Researcher in information and medical ethics

Regina Barzilay, Professor of Computer Science, MIT

Cancer inflicts a heavy toll on our society. One in seven women will be diagnosed with breast cancer during their lifetime, and the disease causes about 450,000 deaths annually worldwide. Despite billions of dollars invested in cancer research, our understanding of the disease, its treatment, and its prevention is still limited.

The majority of cancer research today takes place in biology and medicine; computer science plays a minor supporting role in this process, if any. In this talk, I hope to convince you that natural language processing (NLP) as a field has a chance to play a significant role in this battle. Indeed, free-form text remains the primary means by which physicians record their observations and clinical findings. Unfortunately, this rich source of textual information is severely underutilized by predictive models in oncology, which currently rely primarily on structured data.
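To make the contrast with structured-data models concrete, here is a minimal, hypothetical sketch of a predictor built directly on free text. The note snippets, labels, and the TF-IDF plus logistic-regression pipeline are illustrative assumptions, not models from the talk.

```python
# Hypothetical sketch: using free-text clinical notes as model input.
# The toy notes and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

notes = [
    "invasive ductal carcinoma identified in left breast biopsy",
    "no evidence of malignancy in the sampled tissue",
    "ductal carcinoma in situ, margins clear",
    "benign fibroadenoma, no atypia seen",
]
labels = [1, 0, 1, 0]  # 1 = malignancy mentioned, 0 = benign

# TF-IDF features plus a linear classifier: a minimal text-based predictor.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(notes, labels)
print(model.predict(["core biopsy shows invasive carcinoma"]))
```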

Magnus Rattray, Prof. of Computational and Systems Biology, University of Manchester

Biological systems are highly dynamic and must respond rapidly to external stimuli and an array of feedback systems operating at different scales. We are using probabilistic models to help model these dynamics. Many of the models we have developed are based on Gaussian processes, convenient non-parametric models that can represent time-varying functions with diverse characteristics. The advantage of Gaussian processes lies both in their flexibility as models and in their tractability when carrying out inference from data. We use smooth models for data averaged over large ensembles of cells and stochastic processes for modelling single-cell time-course data from microscopy experiments. I will give some examples of our recent work, including: modelling delays in transcription dynamics from high-throughput sequencing time-course data, identifying a sequence of perturbations in two-sample time-course data, modelling bifurcations in high-dimensional single-cell expression data, and uncovering periodicity in single-cell microscopy time-course data.
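As a minimal illustration of Gaussian process regression on a time course, the sketch below fits a GP with a smooth-plus-noise kernel to a synthetic expression trace. The kernel choice and data are assumptions made for illustration, not the models described in the talk.

```python
# Hypothetical sketch: a Gaussian process fitted to a noisy time course.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)

# Synthetic time course: expression measured at a handful of time points.
t = np.linspace(0, 10, 12)[:, None]
y = np.sin(t).ravel() + 0.2 * rng.normal(size=t.shape[0])

# RBF kernel for a smooth time-varying function, WhiteKernel for measurement noise.
kernel = 1.0 * RBF(length_scale=2.0) + WhiteKernel(noise_level=0.1)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(t, y)

# Posterior mean and uncertainty on a dense grid between observations.
t_dense = np.linspace(0, 10, 100)[:, None]
mean, std = gp.predict(t_dense, return_std=True)
print(mean[:5].round(2), std[:5].round(2))
```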

Anita Schjøll Brede, CEO, Iris AI & Victor Botev, CTO, Iris AI

At Iris AI, we believe that humans have already discovered the solutions to most of our pressing problems. The challenge is that, with more than 3,000 scientific papers published daily, no human can read and understand everything. That's why we are building Iris: an AI that can read science and help us connect the dots of what we already know. We will take you through the current Iris algorithm to see what is already possible today in the field of Natural Language Processing, and then paint a picture of the future to give you ideas of what is to come.
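As a loose illustration of connecting papers by their text (the Iris algorithm itself is not described here), the sketch below links toy abstracts by TF-IDF cosine similarity. Every abstract and parameter is an invented stand-in.

```python
# Hypothetical sketch: finding the most similar paper for each toy abstract.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

abstracts = [
    "deep learning for protein structure prediction",
    "gaussian processes for gene expression time series",
    "convolutional networks applied to protein folding",
    "bayesian inference for single-cell dynamics",
]

vectors = TfidfVectorizer().fit_transform(abstracts)
similarity = cosine_similarity(vectors)

# For each paper, report its most similar neighbour (excluding itself).
for i, row in enumerate(similarity):
    row[i] = 0.0
    print(i, "->", row.argmax())
```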