Feb 7, 2018

Senior DevOps Engineer

BenevolentAI

BenevolentAI is the global leader in the development and application of artificial intelligence (“AI”) for scientific innovation. It is the largest private AI company in Europe and one of the five largest private AI companies in the world. BenevolentAI has built a leading position in artificial intelligence by developing technologies that deliver previously unimaginable scientific advances, rapidly accelerate scientific innovation and disrupt traditional methods of scientific discovery. The technology has been validated in drug discovery, specifically in one of the most challenging fields of human biology: the identification of new disease targets. By amplifying researchers’ ability to grasp an entire corpus of data and iterate the scientific method at exponentially faster rates, BenevolentAI brings highly advanced tools to traditional R&D programmes, enabling artificial intelligence to be applied to the scientific discovery process.

 

The Role

We are looking for an accomplished Senior DevOps Engineer (or Systems Administrator with a passion for DevOps) with a proven track record administering Linux systems across both on-premises and cloud fleets. You will understand modern IT architectures and how best to plan for and administer high volumes of data efficiently and securely. Concepts such as AWS/GCP, Kubernetes/Mesos, infrastructure-as-code and networking should be familiar territory to you.

You will be familiar with containerised build and continuous release environments, as well as the care and feeding of the Atlassian/GitLab/Jenkins toolsets. You will be a core team member building and maintaining the underlying infrastructure that supports our AI-driven technology. You will support your colleagues responsible for office IT infrastructure and add your input in diverse areas such as authentication, network topology, sharded databases, scalable web services and interfaces to external data sources and APIs.

 

Requirements

You must have 5+ years' experience managing Linux systems, with particularly strong knowledge of Ubuntu/Debian environments. Shell scripting, using and supporting AWS/GCP, Docker/Kubernetes, and Terraform must be part of your toolkit.

Desirable experience: Python; maintaining MySQL/PostgreSQL/Neo4j or other NoSQL or graph databases; understanding of big data technologies such as Spark/EMR; networking, including configuring Cisco or other router operating systems; and deploying, configuring and using monitoring and alerting solutions (e.g., InfluxDB/Grafana/Prometheus/DataDog).

You will be working alongside our autonomous, cross-functional squads. You will advocate for high-quality engineering and best practice in production software, as well as provide the infrastructure to both build rapid prototypes and launch production-quality services. You must be a strong communicator who can explain what is required to build and deliver great software products. You will be keen to work with the rest of the team and develop collaboratively.

You will promote test-driven development and other Agile best practices to ensure the software is resilient enough for our scientists to rely upon. Any experience of implementing and monitoring HPC and/or storage infrastructure (including GPU computing) for our AI-driven technology will be a distinct advantage.

 

Salary

Competitive salary depending on experience, plus annual bonus, share options and comprehensive benefits.

If this challenge and opportunity excites you, please email your CV and a covering letter here.