Full-Time

Expired

Data Scientist - Machine Learning

Datatonic

London

We're looking for a machine learning expert to unleash the power of data with our customers. You'll be working closely with our partners and customers on the most exciting data projects: product recommender systems, IoT data analysis, segmenting user behaviour profiles with web analytics data, geospatial analysis with billions of data points, and many more.

You will be part of a growing and agile team that has accumulated expertise in computer vision, recommender systems, NLP, and predictive analytics across various business sectors including media, telecommunications, finance, and e-commerce. Working closely with our data engineers, you will help us build our next-generation machine learning products.

To be successful, you will need advanced analytical skills to find relationships, patterns, and statistical associations within massive data sets. You will have a natural curiosity to dig into unknowns, unearth insights buried in data, and provide practical conclusions. Furthermore, you will have the ability to tackle a project on your own and drive its progress from the initial brainstorm to the delivery of a production-ready solution.

Requirements

  • BSc or higher in computer science, related STEM or quantitative field
  • Good coding skills in a language suitable for data science and machine learning, e.g. Python, Java, MATLAB, R, etc.
  • Experience designing, conducting, analyzing, and interpreting experiments and investigations
  • Excellent problem solving, critical thinking, creativity, organizational, design, and communication skills; ability to interact with all levels of engineers
  • Curiosity about new developments in AI and deep learning
  • Thorough knowledge of numerical optimisation techniques, statistics, ML algorithms, and neural network architectures (CNNs, LSTMs, GANs)

Bonus Points

  • Familiarity with exposing ML components through web services or wrappers (e.g. Flask in Python)
  • Experience handling large datasets (billions of rows)
  • Some knowledge of scaling computations (Spark, GPUs, …)
  • Familiarity with cloud environments (AWS, GCP, Azure)