The right foundation

Knowledge.


The biomedical big data challenge.

90% of the world's data was produced in the last two years, and this rapid increase in data creation is felt most acutely in medical science. The sheer quantity of biomedical information available to scientists is growing exponentially: from genomic, patient, clinical trial and molecular data to consumer genetic data. Add to that the fact that the human body is one of the most complex data systems, with over 37 trillion cells. Keeping up with this information is impossible for even the most learned scientists and research teams, let alone processing it to garner real insight. This is a massive human limitation, but it represents a perfect opportunity for machine learning.

The perfect machine learning opportunity.

BenevolentAI has spent the last five years developing a knowledge pipeline that pulls data from a wide range of structured and unstructured biomedical sources, then curates and standardises this knowledge via a data fabric. This is fed into our proprietary knowledge graph, which extracts and contextualises the relevant information. The knowledge graph is made up of a vast number of contextualised, machine-curated relationships between diseases, genes, drugs and other biomedical entities, spanning more than 20 entity types.
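
To make that structure concrete, the sketch below shows one simple way a graph of typed, contextualised relationships could be represented in code. It is an illustrative toy model only, not BenevolentAI's implementation; the entity types, relation names, provenance fields and example facts are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# Illustrative entity types only; a production graph would cover
# 20+ biomedical entity types and many more relation kinds.
ENTITY_TYPES = {"gene", "disease", "drug", "pathway", "protein"}

@dataclass(frozen=True)
class Entity:
    name: str          # e.g. "BACE1"
    entity_type: str   # one of ENTITY_TYPES

@dataclass(frozen=True)
class Relationship:
    subject: Entity
    predicate: str     # e.g. "is_therapeutic_target_for"
    obj: Entity
    source: str        # provenance: where the fact was extracted from
    confidence: float  # machine-curation score between 0 and 1

@dataclass
class KnowledgeGraph:
    relationships: list = field(default_factory=list)

    def add(self, rel: Relationship) -> None:
        self.relationships.append(rel)

    def neighbours(self, entity: Entity):
        """Return every fact in which the entity appears."""
        return [r for r in self.relationships
                if r.subject == entity or r.obj == entity]

# Toy usage with hypothetical facts.
kg = KnowledgeGraph()
bace1 = Entity("BACE1", "gene")
alzheimers = Entity("Alzheimer's disease", "disease")
kg.add(Relationship(bace1, "is_therapeutic_target_for", alzheimers,
                    source="literature:PMID-example", confidence=0.82))
print(kg.neighbours(bace1))
```

Storing each relationship with its provenance and a confidence score is what makes the facts "contextualised": downstream reasoning can weigh or filter evidence rather than treating every edge as equally trustworthy.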


Interpret

Constructing large-scale biomedical knowledge bases from scratch.


This is crucial for inferring relationships between biomedical entities. However, when it comes to drug discovery, the scarcity of relevant facts (for example, that gene X is a therapeutic target for disease Y) limits the ability to build a usable knowledge base, either directly or by training a relationship extraction model.
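
One common way to stretch a small set of curated facts is distant supervision: known facts are used to weakly label sentences in the literature so that a relationship extraction model can be trained despite the scarcity of explicit labels. The sketch below illustrates the idea in its simplest form; the seed facts, sentences and string-matching heuristic are assumptions for illustration and do not describe BenevolentAI's pipeline.

```python
# Hypothetical seed facts of the form (gene, disease), meaning
# "gene is a therapeutic target for disease". In practice such curated
# facts are scarce, which is exactly the limitation noted above.
SEED_FACTS = {("BACE1", "Alzheimer's disease")}

def distant_supervision_labels(sentences):
    """Weakly label sentences: a sentence mentioning both halves of a
    known fact is treated as a (noisy) positive training example."""
    labelled = []
    for sentence in sentences:
        label = 0
        for gene, disease in SEED_FACTS:
            if gene in sentence and disease in sentence:
                label = 1
                break
        labelled.append((sentence, label))
    return labelled

corpus = [
    "BACE1 inhibition has been explored in Alzheimer's disease trials.",
    "TP53 mutations are common across many cancers.",
]
print(distant_supervision_labels(corpus))
# First sentence is labelled 1 (positive), second 0 (negative)
```

The weak labels are noisy, which is why the resulting examples are typically treated as training signal for a model rather than as facts in their own right.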

 

Our knowledge graph forms the foundation of our technology components, starting with: