Machine Learning Infrastructure
What is the point of learning if you don't apply the learning to change yourself and the world? In partnership with AI and our sister teams in AI Infrastructure, the Machine Learning Infrastructure team facilitates the robust, efficient, and straightforward application of machine-learned capabilities to LinkedIn's mission.
We are one of the core teams behind the Productive Machine Learning (Pro-ML) initiative, bringing fast, end-to-end model and feature development to LinkedIn Engineering.
Deep Learning Authoring and Inference Infra
We are a team that provides robust, scalable, and easy-to-use frameworks and solutions that empower AI model training, scoring, and ranking services across LinkedIn's deep learning use cases and product lines. Our technology stack spans offline, nearline, and online environments and leverages state-of-the-art open-source technologies.
Model Automation and Creation
The mission of the Model Automation & Creation Infra team is to provide state-of-the-art modeling pipeline tools to automate advanced AI model creation at LinkedIn. The flexible and scalable model automation product—Photon-Connect (PC)—enables engineers and modelers to try out different AI/ML methods quickly and become more productive in the model creation process. Our solutions solve challenges in machine learning through large-scale distributed computation on different hardware configurations (e.g., CPU and GPU), composing different classes of machine learning models (e.g., generalized linear models (GLM), personalized linear models (GLMix), boosted trees, and deep neural networks), and automating the end-to-end industrial machine learning process.
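To make "composing different classes of machine learning models" concrete, here is a minimal sketch of blending a generalized-linear-style model with a boosted-tree model via stacking. It uses scikit-learn for illustration only; it is not the Photon-Connect API, and all names and hyperparameters are made up for the example.

```python
# Illustrative sketch: composing a linear model with boosted trees,
# in the spirit of mixing GLM and tree model classes. Not Pro-ML code.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, StackingRegressor
from sklearn.linear_model import Ridge

# Synthetic training data for the example.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(scale=0.1, size=200)

# Stack a generalized-linear-style model with a boosted-tree model;
# a final linear layer learns how to blend their predictions.
model = StackingRegressor(
    estimators=[
        ("glm", Ridge(alpha=1.0)),
        ("gbt", GradientBoostingRegressor(n_estimators=50)),
    ],
    final_estimator=Ridge(),
)
model.fit(X, y)
preds = model.predict(X)
print(preds.shape)  # (200,)
```

In a production automation system, the same composition idea would be expressed as a declarative pipeline rather than hand-written estimator code, so modelers can swap model classes without rewriting the workflow.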
Model and Feature Productionalization
Productionalization is the process of making a trained model and all of its dependencies (including features) ready for use in production. It covers the entire timeline from when a model finishes training until application code can use it. We partner closely with various teams in AI, Storage, Streams, and Big Data to build a platform that allows AI engineers to easily productionalize their models and monitor their health.
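One way to picture what productionalization tracks is a model artifact bundled with its feature dependencies, checked for completeness before serving. This is a hypothetical sketch; the class and function names are invented for illustration and are not LinkedIn's actual platform APIs.

```python
# Hypothetical sketch of a productionalization check: a model is servable
# only when every feature it depends on is available. All names invented.
from dataclasses import dataclass, field

@dataclass
class ModelArtifact:
    name: str
    version: str
    path: str                                          # where the trained model is stored
    feature_deps: list = field(default_factory=list)   # feature names the model requires

def ready_for_serving(model: ModelArtifact, available_features: set) -> bool:
    """Return True only if all of the model's feature dependencies exist."""
    missing = [f for f in model.feature_deps if f not in available_features]
    return not missing

artifact = ModelArtifact(
    name="job-recommendation",
    version="1.4.0",
    path="/models/job-rec/1.4.0",
    feature_deps=["member_skills", "job_title_embedding"],
)
print(ready_for_serving(artifact, {"member_skills", "job_title_embedding"}))  # True
print(ready_for_serving(artifact, {"member_skills"}))                         # False
```

The point of the check is the second call: a model whose features never made it to production is blocked from serving, rather than silently scoring with missing inputs.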
Feature Access & Representation
Feature preparation is often tedious and is responsible for a large part of machine learning systems' complexity. The Frame team's goal is to change that, making machine learning easier by providing 1) a common feature namespace and 2) tools and systems that enable easy access to Frame features for training and scoring, with guaranteed consistency across online, offline, and streaming environments.
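The idea behind a common feature namespace can be sketched in a few lines: both training and serving request features by name, so they resolve to the same definition and cannot drift apart. This toy example is an assumption-laden illustration, not the actual Frame API.

```python
# Toy sketch of a common feature namespace. Feature names and entity
# fields here are invented; this is not Frame, only the core idea.

FEATURE_DEFS = {
    # feature name -> function from raw entity data to a feature value
    "member_connection_count": lambda entity: len(entity.get("connections", [])),
    "member_has_premium": lambda entity: 1 if entity.get("premium") else 0,
}

def get_features(entity: dict, names: list) -> dict:
    """Resolve features by name. Offline training and online scoring call
    the same definitions, so computed values stay consistent."""
    return {n: FEATURE_DEFS[n](entity) for n in names}

member = {"connections": ["a", "b", "c"], "premium": True}
print(get_features(member, ["member_connection_count", "member_has_premium"]))
# {'member_connection_count': 3, 'member_has_premium': 1}
```

In a real system the registry would span batch, streaming, and online stores, but the contract is the same: one name, one definition, every environment.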
Feature Generation & Knowledge Infra
We are a team that provides a large-scale, cross-platform data-processing platform and framework for ML feature generation and for AI-powered derived data generation and serving. For ML feature generation, the team focuses on a nearline feature generation framework for building feature pipelines in production environments. For derived data generation and serving, the team provides the framework, platform, tools, and supporting infrastructure that AI teams use to build their own workflows for different types of content; the platform is also extensible to allow specialization for specific content, including the professional knowledge base of LinkedIn's Economic Graph. The team's ultimate goal is to enable faster development and iteration in computing and serving ML features and derived data.
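A nearline feature pipeline can be pictured as: consume a stream of events, update a feature incrementally, and keep the latest value queryable. The sketch below is a minimal, self-contained illustration with an invented event shape; a real pipeline would run on a streaming system rather than an in-memory loop.

```python
# Minimal sketch of a nearline feature pipeline: stream in, incremental
# update, queryable feature out. Event fields are illustrative only.
from collections import defaultdict

def update_click_count(store: dict, event: dict) -> None:
    """Incrementally maintain a per-member click-count feature."""
    if event["type"] == "click":
        store[event["member_id"]] += 1

feature_store = defaultdict(int)
stream = [
    {"type": "click", "member_id": "m1"},
    {"type": "view",  "member_id": "m1"},
    {"type": "click", "member_id": "m1"},
    {"type": "click", "member_id": "m2"},
]
for event in stream:
    update_click_count(feature_store, event)

print(feature_store["m1"], feature_store["m2"])  # 2 1
```

The nearline property is that the feature value is fresh as soon as the event is processed, rather than waiting for the next offline batch job.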
Relevance Explains (REx)
REx (a.k.a. "prescriptions for relevance models") builds tools to troubleshoot online AI models, providing AI and AI Infra teams with observability and interpretability of AI models in production and enabling them to efficiently identify and fix data, model, and code issues, as well as explain model behavior. We also build a set of tools powering AI productivity across domains such as the ML workspace, our AI metadata, and AI productivity metrics monitoring.
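One simple example of the kind of observability check such tooling might run is comparing live model scores against a training-time baseline to flag drift. The function, data, and threshold below are made up for illustration and do not describe REx internals.

```python
# Tiny illustrative drift check: flag when the mean live score moves too
# far from the baseline mean. Threshold and scores are invented.
import statistics

def score_drift(baseline: list, live: list, threshold: float = 0.1) -> bool:
    """Flag drift when the live mean deviates from the baseline mean
    by more than `threshold`."""
    return abs(statistics.mean(live) - statistics.mean(baseline)) > threshold

baseline_scores = [0.50, 0.52, 0.48, 0.51]
live_scores = [0.70, 0.72, 0.68, 0.71]
print(score_drift(baseline_scores, live_scores))  # True
```

Production systems would use richer distributional tests and per-feature checks, but the workflow is the same: detect the anomaly automatically, then point the engineer at the data, model, or code change that caused it.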