Machine Learning Infra

The Machine Learning Infra team works in partnership with AI teams and sister teams in AI infrastructure. We enable robust, efficient, and straightforward application of machine learning to LinkedIn’s mission.

We are one of the core teams behind the Productive Machine Learning (Pro-ML) initiative, bringing fast, end-to-end model and feature development to LinkedIn engineering.

Our major projects include:

  • Health Assurance: Aims to provide a lightweight, automated, and efficient way to validate and monitor models at inference time, and to catch and surface problems as early as possible. It is a key pillar of Pro-ML. A minimal monitoring sketch follows this list.

  • Model Deployment: The process of making a trained model ready for use in production, covering the entire timeline from when training finishes until application code can use the model. Pro-ML model deployment provides a common process and a suite of tools that let Pro-ML engineers create, track, deploy (or undeploy) model artifacts to production, and monitor the health of each model. A lifecycle sketch follows this list.

  • Model Cloud: Serving models for point or batch inference is inarguably operations heavy and requires each team to build on and integrate with the Pro-ML serving ecosystem. This adds complexity at the application layer, demands non-trivial engineering investment from every team, and consequently prevents creating leverage. To address this problem, we are building Model Cloud, a managed model serving solution for online inference. A minimal serving sketch follows this list.
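
The sketch below illustrates the kind of inference-time check Health Assurance automates: comparing live model scores against a baseline captured at validation time and alerting when they drift apart. The function names, threshold, and data are hypothetical and do not reflect the actual Health Assurance implementation.

```python
"""Illustrative sketch of an inference-time model health check (hypothetical)."""
import statistics


def score_drift(baseline_scores, live_scores):
    """Return the absolute difference between baseline and live mean scores."""
    return abs(statistics.mean(baseline_scores) - statistics.mean(live_scores))


def check_model_health(baseline_scores, live_scores, max_drift=0.05):
    """Flag the model as unhealthy when live scores drift too far from the baseline."""
    drift = score_drift(baseline_scores, live_scores)
    healthy = drift <= max_drift
    if not healthy:
        # In a real system this would raise an alert to the owning team.
        print(f"ALERT: score drift {drift:.3f} exceeds threshold {max_drift}")
    return healthy


if __name__ == "__main__":
    baseline = [0.12, 0.15, 0.11, 0.14, 0.13]   # scores captured at validation time
    live = [0.25, 0.27, 0.30, 0.26, 0.28]       # scores observed at inference time
    check_model_health(baseline, live)
```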
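
The next sketch illustrates the artifact lifecycle described above (create, track, deploy, undeploy) with a hypothetical in-memory registry. The class, statuses, and URIs are illustrative only and are not the actual Pro-ML deployment tooling.

```python
"""Illustrative sketch of a model-artifact lifecycle (hypothetical)."""
from dataclasses import dataclass, field


@dataclass
class ModelRegistry:
    """Tracks model artifacts and their deployment status."""
    artifacts: dict = field(default_factory=dict)

    def create(self, name, version, uri):
        # Register a newly trained artifact; it is tracked but not yet serving.
        self.artifacts[(name, version)] = {"uri": uri, "status": "REGISTERED"}

    def deploy(self, name, version):
        # Mark the artifact as live so application code can start using it.
        self.artifacts[(name, version)]["status"] = "DEPLOYED"

    def undeploy(self, name, version):
        # Take the artifact out of production without deleting its history.
        self.artifacts[(name, version)]["status"] = "UNDEPLOYED"

    def track(self, name, version):
        # Report the current lifecycle state of an artifact.
        return self.artifacts[(name, version)]["status"]


if __name__ == "__main__":
    registry = ModelRegistry()
    registry.create("feed-ranker", "1.4.0", "hdfs://models/feed-ranker/1.4.0")
    registry.deploy("feed-ranker", "1.4.0")
    print(registry.track("feed-ranker", "1.4.0"))  # DEPLOYED
```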
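
To make the operational burden concrete, the final sketch shows a minimal, hypothetical HTTP scoring endpoint of the kind each team would otherwise have to build, run, scale, and monitor on its own. A managed solution like Model Cloud owns this serving layer so application teams only send inference requests; nothing here is the actual Model Cloud API.

```python
"""Illustrative sketch of a per-team online inference endpoint (hypothetical)."""
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def score(features):
    # Placeholder model: a real deployment would load a trained artifact.
    return sum(features) / max(len(features), 1)


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        features = json.loads(body).get("features", [])
        response = json.dumps({"score": score(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(response)


if __name__ == "__main__":
    # Each team running its own copy of this (plus scaling, monitoring, and
    # deployment) is the operational load a managed serving solution removes.
    HTTPServer(("localhost", 8080), InferenceHandler).serve_forever()
```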
