Responsible AI

An update on Responsible AI at LinkedIn

Co-authors: Parvez Ahammad, Kinjal Basu, Shaunak Chatterjee, Sakshi Jain, Ryan Rogers, and Guillaume Saint-Jacques

At LinkedIn, our guiding principle is “Members First.” It puts our members’ interests first and ensures we honor our responsibility to protect them and maintain their trust in every decision we make. A key area where we apply this value in engineering is our design process. We call this “responsible design,” which means that everything we build is intended to work as part of a unified system that delivers the best member experience, provides the right protections for our members and customers, and mitigates any unintended consequences in our products. 

One of the core pillars of “responsible design” is “responsible AI,” which follows Microsoft’s Responsible AI Principles. The six values we build into our products are fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. They are designed to put people first and enable us to empower our members, better serve our customers, and benefit society. 

In addition to the six values, responsible AI is also about intent and impact:

  • Intent involves evaluating training data, designing systems, and reviewing model performance before the model is ever deployed to production to make sure that our principles are reflected at every step in the process. It includes actively changing our products and algorithms to empower every member.

  • Impact covers detecting and monitoring the ways that people interact with products and features after they are deployed. We do this by measuring whether they provide significant value and empower individuals to reach their goals.

Intent and impact form a cyclical process of refinement that works hand-in-hand toward the broader goal of responsible design. 

Responsible AI in action

Responsible AI isn’t new to LinkedIn. In fact, it’s a well-established principle that has informed our engineering and product development process for years. Designing, building, and testing a new product feature at LinkedIn involves many stages, and anywhere from dozens to hundreds of people can play a role in shaping the initial or subsequent releases. 

Here are a few examples of how we’ve applied the principles of Fairness and Privacy to our development process. 

Fairness
Simply put, AI systems should treat all people fairly. To make this principle actionable, we look for opportunities where this value aligns with LinkedIn’s unique position as a platform for economic opportunity, our portfolio of AI-powered products, and our company’s vision for the global workforce. 

For example, starting in 2017, our fairness work in AI focused on ensuring that “two members of equal talent should have equal access to opportunities.” We embarked on this process based on conversations with customers who wanted help eliminating implicit bias from their hiring practices, and we recognized the need to adopt a “diversity by design” principle in our product design process. This led us to build representative talent search into LinkedIn Recruiter.
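
To illustrate the idea behind representative ranking, the sketch below greedily re-ranks scored candidates so that, at every position, the mix of results so far tracks a target attribute distribution. This is a minimal sketch of one common re-ranking approach, not the implementation inside LinkedIn Recruiter; the function name, inputs, and attribute values are illustrative assumptions.

```python
from collections import defaultdict

def rerank_representative(candidates, target_dist, attr="group"):
    """Greedily re-rank scored candidates so that, at every position, the
    attribute mix of the results so far tracks a target distribution.

    candidates: list of dicts with a "score" and a protected attribute `attr`.
    target_dist: desired fraction per attribute value, e.g. {"A": 0.5, "B": 0.5}.
    """
    pools = defaultdict(list)
    for c in sorted(candidates, key=lambda c: c["score"], reverse=True):
        pools[c[attr]].append(c)  # each pool stays sorted by score

    ranked, counts = [], defaultdict(int)
    while any(pools.values()):
        def deficit(g):
            # How far group g would lag behind its target share after the next pick.
            return target_dist.get(g, 0.0) * (len(ranked) + 1) - counts[g]
        # Pick the most under-represented group; break ties with the best score.
        group = max((g for g in pools if pools[g]),
                    key=lambda g: (deficit(g), pools[g][0]["score"]))
        ranked.append(pools[group].pop(0))
        counts[group] += 1
    return ranked
```

With a 50/50 target, for example, this keeps the two groups roughly balanced in every prefix of the list while preserving score order within each group.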

Since then, we have scaled our efforts across our portfolio of AI-powered products and developed and open-sourced the LinkedIn Fairness Toolkit (LiFT) to address bias in large-scale AI applications. LiFT not only helps in measuring fairness but also provides bias mitigation strategies that can be applied in large-scale AI systems. In the last year, we continued developing newer mitigation strategies, novel frameworks for fairness in a marketplace setting, as well as an interpretable assessment of fairness. We will continue our work to make these readily available through LiFT in the near future. Our work in this space continues to evolve as a result of newly published research, access to better data, and the evolution of our company’s own vision (for example, our commitment to encourage a more equitable economy). 
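
To make the kind of measurement a toolkit like LiFT supports concrete, here is a minimal sketch of one standard fairness metric, the equal opportunity difference (the gap in true positive rates between two groups). It is written as a small standalone Python function for illustration and is not LiFT’s API, which is built to run these measurements on large-scale datasets.

```python
def equal_opportunity_difference(records, group_attr="group"):
    """Difference in true positive rates between two groups.

    records: iterable of dicts with a binary "label" (ground truth),
    a binary "prediction" (model output), and a protected attribute.
    """
    true_positives, positives = {}, {}
    for r in records:
        g = r[group_attr]
        if r["label"] == 1:
            positives[g] = positives.get(g, 0) + 1
            if r["prediction"] == 1:
                true_positives[g] = true_positives.get(g, 0) + 1

    groups = sorted(positives)
    assert len(groups) == 2, "this sketch only compares two groups"
    tpr = {g: true_positives.get(g, 0) / positives[g] for g in groups}
    return tpr[groups[0]] - tpr[groups[1]]  # 0.0 means equal opportunity
```

A value near zero indicates that qualified members of both groups receive a positive prediction at a similar rate; a large gap is the kind of signal that motivates the mitigation strategies mentioned above.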

What’s more, in our commitment to build products and programs responsibly to empower individuals regardless of their background or social status, we introduced Project Every Member, a set of A/B testing tools that detect inequality impact both in terms of specific groups (like social capital level) and in terms of concentration of opportunities and resources. We open-sourced these tools to help every company build more inclusive products, shared our learnings from using them internally, and have used them to automatically analyze all AI experiments over the past two years. 
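
The sketch below illustrates the kind of computation an inequality-impact check involves: compare an inequality measure of a member outcome between the control and treatment arms of an A/B test. The Atkinson index used here is one common choice of inequality measure; the function names and the epsilon parameter are illustrative assumptions, not the open-sourced tooling itself.

```python
import numpy as np

def atkinson_index(outcomes, epsilon=1.0):
    """Atkinson inequality index of a vector of non-negative outcomes.

    epsilon is the inequality-aversion parameter; higher values weight
    the lower end of the distribution more heavily.
    """
    y = np.asarray(outcomes, dtype=float)
    mean = y.mean()
    if epsilon == 1.0:
        # Geometric-mean form; a tiny constant keeps zero outcomes from breaking the log.
        geo_mean = np.exp(np.log(y + 1e-12).mean())
        return 1.0 - geo_mean / mean
    ede = np.power(y, 1.0 - epsilon).mean() ** (1.0 / (1.0 - epsilon))
    return 1.0 - ede / mean

def inequality_impact(control_outcomes, treatment_outcomes, epsilon=1.0):
    """Change in the Atkinson index from control to treatment.

    A negative value suggests the treated experience distributes the outcome
    more equally across members; a positive value suggests it concentrates it.
    """
    return atkinson_index(treatment_outcomes, epsilon) - atkinson_index(control_outcomes, epsilon)
```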

Privacy
Member privacy is a priority at LinkedIn and a key consideration in product design. We aim to protect member privacy by using innovative privacy engineering techniques such as differential privacy to provide valuable insights to members and policy makers.

Differential privacy enables us to release aggregated data insights without compromising the privacy of any individual member in the dataset. For example, when the world was in the midst of the COVID-19 pandemic, we wanted to help governments, policy makers, and individuals in the global workforce make informed decisions about hiring trends. Differential privacy enabled us to show them the top jobs, employers, and skills of those who were hired, so they could understand where opportunities existed in an aggregated and privacy-protective manner.
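
As a concrete illustration of the underlying idea, the sketch below releases a count (for example, the number of members hired into a job title) using the Laplace mechanism, the textbook way to satisfy epsilon-differential privacy for counting queries. The numbers, names, and parameter choices are made up for illustration and do not describe LinkedIn’s production system.

```python
import numpy as np

def dp_count(true_count, epsilon, sensitivity=1.0, rng=None):
    """Release a count with epsilon-differential privacy via the Laplace mechanism.

    Each member contributes at most `sensitivity` to the count, so adding
    Laplace noise with scale sensitivity / epsilon satisfies epsilon-DP.
    """
    rng = rng or np.random.default_rng()
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Example: privately release hiring counts per job title (illustrative data only).
hires_by_title = {"Software Engineer": 1520, "Data Analyst": 830, "Nurse": 640}
private_counts = {t: round(dp_count(c, epsilon=0.5)) for t, c in hires_by_title.items()}
```

Rounding the noisy value is safe because any post-processing of a differentially private output remains differentially private.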

The differential privacy team at LinkedIn has developed new algorithms that work with existing real-time data analytics systems, allowing us to build systems at scale while protecting member privacy. We have applied our differential privacy system to applications in the Economic Graph and Labor Market Insights.

The road ahead

For the past few years, we have applied our responsible AI principles across what we build at LinkedIn because it’s the right thing to do for our 756M members. We are building on our past efforts in the areas of privacy and fairness, and we continue to learn more every day. In future updates over the months to come, we look forward to sharing more of the exciting work we are doing in areas like reliability and safety, security, inclusiveness, transparency, accountability, and privacy-preserving AI frameworks (such as federated machine learning on encrypted data). 

Acknowledgements

The authors would like to acknowledge Igor Perisic, Romer Rosales, Ya Xu, Sofus Macskássy, and Ram Swaminathan for their leadership in Responsible AI. We would also like to acknowledge the entire Data Science Applied Research and Responsible AI teams for their contributions to our past and future work, as well as the members of several AI and DS teams who have contributed to the LinkedIn Fairness Toolkit and other fairness efforts.

Thank you to our coworkers on the Social Impact team, our Product teams, the Equity team, Policy, Legal, and other Engineering teams for their feedback and active engagement. We would especially like to acknowledge Meg Garlinghouse, Daniel Tweed-Kent, and Bef Ayenew for their leadership and continued advocacy, as well as Kalinda Raina, Catalin Cosovanu, Jen Ramos, and Jon Adams for their guidance. Thank you to Fred Han and Stephen Lynch, without whom this blog post would not have been possible.