Our approach to building transparent and explainable AI systems

Co-authors: Parvez Ahammad, Kinjal Basu, Yazhou Cao, Shaunak Chatterjee, David Durfee, Sakshi Jain, Nihar Mehta, Varun Mithal, and Jilei Yang

Delivering the best member and customer experiences with a focus on trust is core to everything that we do at LinkedIn. As we continue to build on the Responsible AI program we outlined three months ago, a key part of our work is designing products that provide the right protections, mitigate unintended consequences, and ultimately better serve our members, customers, and society. 

We’ve previously discussed how every system we build needs to be trustworthy, avoid harmful bias, and respect privacy. We’ve also showcased our process for building more inclusive and equitable products and programs to ensure they empower individuals regardless of their background or social status. In this post, we will describe how transparent and explainable AI systems can help verify that these needs have been met. 

How we define “Transparency” as a principle

To us, transparency means that AI system behavior and its related components are understandable, explainable, and interpretable. The goal is that end users of AI—such as LinkedIn employees, customers, and members—can use these insights to understand these systems, suggest improvements, and identify potential problems (should they arise). 

Developing large-scale AI-based systems that are fair and equitable, or that protect our users, may not be possible if our systems are opaque. We need to understand what is in our datasets, how our algorithms work, and ultimately how they impact our end users. Transparency, in the context of responsible AI (and, more generally, responsible product design), is an important property that makes other engineering and design goals achievable. Within LinkedIn, we need the ability to have internal, cross-functional conversations about how our systems work, which isn’t possible without this shared understanding.

Transparency + Context = Explainability
Transparency allows modelers, developers, and technical auditors to understand how an AI system works, including how a model is trained and evaluated, what its decision boundaries are, what inputs go into the model, and finally, why it made a specific prediction. This is often also described as “interpretability” in existing research. 

Explainable AI (XAI), a related concept, goes a step further by explaining to members and customers how a system works or why a particular recommendation was made. XAI that uses causal inference can also provide them with actionable insights for achieving their goals with the help of an AI system (for example, suggesting that a recruiter expand their search criteria to include a related skill in order to improve a candidate selection pool, or suggesting tags to a creator in order to improve the reach of their content).

We wanted to highlight a few key ways we've improved transparency in AI at LinkedIn, including:

  • Explainable AI for model consumers to build trust and augment decision-making

  • Explainable AI for modelers to perform model debugging and improvement

  • Transparency beyond AI systems

Below, we share some details about each of these improvements and how we’ve implemented them at LinkedIn.

End-to-end system explainability to augment trust and decision-making

Predictive machine learning models are widely used at LinkedIn to power recommender systems in member-facing products such as newsfeed ranking, search, and job recommendations, as well as customer-facing products within sales and marketing. One challenge is surfacing model outputs in an intuitive way to teams not familiar with machine learning. Complex predictive models often lack transparency, which results in low trust from these teams even when predictive performance is high. While many model interpretation approaches such as SHAP and LIME return the most important features behind a prediction, those raw feature attributions may not be well organized or intuitive to these teams. This can result in limited trust in the model and a lack of clarity on what action to take based on the model’s prediction.
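To make concrete what those raw outputs look like, below is a minimal sketch of per-prediction feature attribution with SHAP. This is a generic illustration rather than LinkedIn’s pipeline, and the toy dataset, model, and feature names (seat_utilization, login_frequency, and so on) are hypothetical.

```python
# Minimal sketch: per-prediction feature attribution with SHAP on a toy model.
# The dataset, model, and feature names are hypothetical and for illustration only.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
X = pd.DataFrame(
    rng.random((500, 4)),
    columns=["seat_utilization", "login_frequency",
             "inmail_acceptance_rate", "contract_age_days"],
)
y = (X["seat_utilization"] + X["login_frequency"] > 1.0).astype(int)
model = GradientBoostingClassifier().fit(X, y)

# Explain a single prediction and rank features by the magnitude of their contribution.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[[0]])  # shape: (1, n_features)
contributions = pd.Series(shap_values[0], index=X.columns)
print(contributions.reindex(contributions.abs().sort_values(ascending=False).index))
```

A ranked list of signed attribution scores like this is meaningful to a modeler, but a sales or marketing partner still has to translate feature names and raw numbers into a business story; closing that gap is the motivation for the model explainer described next.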

To deal with this challenge, we developed CrystalCandle (previously called Intellige), a customer-facing model explainer that creates digestible interpretations and insights reflecting the rationale behind model predictions. As of mid-2021, we have integrated CrystalCandle with more than eight business predictive models covering five lines of business at LinkedIn, assisting more than 5,000 employees. 

For sales and marketing models, CrystalCandle can interpret and convert non-intuitive machine learning model outputs into customer-friendly narratives that are clear and actionable.

Diagram showing components of the CrystalCandle explainability system at LinkedIn

The diagram above shows that CrystalCandle builds an end-to-end pipeline from machine learning platforms to end-user platforms. It consists of four components: Model Importer, Model Interpreter, Narrative Generator, and Narrative Exporter. Model Interpreter and Narrative Generator provide CrystalCandle users with interfaces for implementing model interpretation approaches and for customizing narrative insights, respectively. The entire CrystalCandle product is built on Apache Spark to achieve high computational efficiency and has also been integrated into the ProML pipeline to enable a one-platform solution from model building to narrative serving. 
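As a rough illustration of this pipeline shape, the sketch below strings the four stages together in plain Python. The class, function, and template names are hypothetical stand-ins rather than CrystalCandle’s actual interfaces, and the production system runs on Apache Spark rather than in-memory Python.

```python
# Hypothetical sketch of the four-stage pipeline shape described above; the names
# and templates are illustrative only, not CrystalCandle's actual interfaces.
from dataclasses import dataclass

@dataclass
class FeatureContribution:
    name: str            # human-readable feature name
    value: float         # feature value for this entity
    contribution: float  # signed attribution score (e.g., a SHAP value)

def model_importer(scored_records):
    """Load model scores and feature attributions from the ML platform (stubbed)."""
    return scored_records

def model_interpreter(record, top_k=3):
    """Rank feature contributions for one prediction and keep the top k."""
    ranked = sorted(record["contributions"],
                    key=lambda c: abs(c.contribution), reverse=True)
    return ranked[:top_k]

def narrative_generator(entity, top_contributions):
    """Turn ranked contributions into a short, customer-friendly narrative."""
    reasons = "; ".join(f"{c.name} is {c.value:g}" for c in top_contributions)
    return f"{entity}: the top drivers of the predicted score are {reasons}."

def narrative_exporter(narratives):
    """Deliver narratives to an end-user platform (stubbed as stdout)."""
    for narrative in narratives:
        print(narrative)

records = model_importer([{
    "entity": "Account A",
    "contributions": [
        FeatureContribution("seat utilization", 0.35, -0.8),
        FeatureContribution("login frequency", 2.0, -0.5),
        FeatureContribution("contract age (days)", 400, 0.1),
    ],
}])
narrative_exporter(narrative_generator(r["entity"], model_interpreter(r))
                   for r in records)
```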

The CrystalCandle example above shows how explainability can build trust in an AI system for teams at LinkedIn outside of AI. We have seen that the same approach can lead to improvements in other areas, such as member trust, where we extended explainable AI to customer service interactions. 

For example, when an anti-abuse classifier flags a member’s account as possibly being the victim of an account takeover, we provide an explanation with the underlying reasons and signals used by the model. This explanation is visible only to our internal customer service representatives and investigation team. It helps guide impacted members on how to safely undo the account changes made by an attacker and bring their account back to its original, pre-compromised state. The same information is also leveraged when identifying new attack patterns or debugging model errors. Similar work has been done in combating jobs fraud, where the classifier surfaces the reasons for taking down a job to the human reviewer. This effort resulted in an increase in both reviewer accuracy and efficiency. 

More details on CrystalCandle can be found in this paper.

Explainable AI for modelers to understand their systems

Machine learning engineers at LinkedIn need to understand how their models make decisions so they can identify blind spots and, with them, opportunities for improvement. For this, we have explainability tools that allow model developers to derive insights about their model’s behavior and characteristics at a finer granularity. 

For example, a modeler may find that while a recommendation system like Jobs You Might be Interested In (JYMBII) does well globally, there may be pockets within certain industries and seniority levels where performance is poor because there are fewer training examples to learn from. Our tools allow modelers to automatically identify cohorts where their model is underperforming. We also leverage similar tools to understand performance gaps across member attributes like gender, in order to increase our models’ effectiveness for all members.

Sample diagram of explainability tools
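As a hedged sketch of what identifying underperforming cohorts can look like offline, the snippet below computes a per-cohort AUC report. It is not LinkedIn’s internal tooling, and the column names (industry, seniority, label, score) and thresholds are hypothetical.

```python
# Hedged sketch of cohort-level offline evaluation; not LinkedIn's internal tooling.
# Column names and thresholds are hypothetical.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

def underperforming_cohorts(df, by, min_size=200, auc_threshold=0.65):
    """Return cohorts whose AUC falls below a threshold, given enough examples."""
    rows = []
    for key, group in df.groupby(by):
        # Skip tiny or single-class cohorts where AUC is undefined or unreliable.
        if len(group) < min_size or group["label"].nunique() < 2:
            continue
        rows.append((key, roc_auc_score(group["label"], group["score"]), len(group)))
    report = pd.DataFrame(rows, columns=["cohort", "auc", "n"])
    return report[report["auc"] < auc_threshold].sort_values("auc")

# Hypothetical offline scores for a job-recommendation model.
rng = np.random.default_rng(7)
scored = pd.DataFrame({
    "industry": rng.choice(["tech", "finance", "healthcare"], 3000),
    "seniority": rng.choice(["entry", "senior"], 3000),
    "label": rng.integers(0, 2, 3000),
    "score": rng.random(3000),
})
print(underperforming_cohorts(scored, by=["industry", "seniority"]))
```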

However, just understanding these finer cohorts may not be enough. The obvious question follows: “what do we do with that understanding?” We are currently developing an automated model refinement method that can look at the underperforming segments and improve the model accordingly. We look forward to sharing more details in future blog posts.

Transparency beyond AI systems

Everything we build is intended to work as part of a unified system that delivers the best member experience possible. From a holistic “responsible design” perspective, there are many non-AI initiatives that help increase the transparency of our products and experiences.

On the backend, researchers have previously highlighted the importance of dataset documentation as a key enabler of transparency. We use systems such as DataHub to provide detailed documentation of our datasets and Data Sentinel to check and validate datasets for quality.    
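As a generic illustration of the kind of checks such a validation step can run (this is not Data Sentinel’s actual API, and the schema and thresholds below are hypothetical), a pipeline might refuse to consume a dataset that fails basic quality rules:

```python
# Generic data-quality checks; not Data Sentinel's API.
# The expected schema and the thresholds are hypothetical.
import pandas as pd

EXPECTED_COLUMNS = {"member_id", "industry", "seniority", "label"}

def validate_training_data(df):
    """Return a list of human-readable data-quality violations."""
    missing = EXPECTED_COLUMNS - set(df.columns)
    if missing:
        return [f"missing columns: {sorted(missing)}"]
    problems = []
    if df["member_id"].duplicated().any():
        problems.append("duplicate member_id values found")
    null_rate = df["label"].isna().mean()
    if null_rate > 0.01:
        problems.append(f"label null rate {null_rate:.1%} exceeds the 1% budget")
    return problems

# Hypothetical usage on a toy frame with two injected problems.
df = pd.DataFrame({
    "member_id": [1, 2, 2],
    "industry": ["tech", "finance", "tech"],
    "seniority": ["entry", "senior", "senior"],
    "label": [1, 0, None],
})
for problem in validate_training_data(df):
    print("violation:", problem)
```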

On the frontend, we launched a transparency initiative designed to earn and preserve member trust through improvements to our reporting and policy enforcement experiences. The improved experience provides both reporters and content authors with increased transparency through every step in the content moderation funnel. 

We plan to introduce more initiatives to educate our members about the design of our feed, messaging systems, and revenue products. There is still a long way to go, and as we continue on this journey, we’ll work to demystify the behavior of large-scale AI systems, because we believe every step we take towards transparency is a step in the right direction.  

Acknowledgments

We would like to acknowledge Igor Perisic, Romer Rosales, Ya Xu, Sofus Macskássy, and Ram Swaminathan for their leadership in Responsible AI. We would also like to acknowledge the work of the ProML and ProML Relevance Explains teams (especially Eing Ong, Yucheng Qian, Shannon Bain, and Joshua Hartmann). 

We would like to thank all of the contributors and users who assisted with CrystalCandle from the Data Science Applied Research team (Diana Negoescu, Saad Eddin Al Orjany, Rachit Arora), the Data Science Go-to-Market team (Harry Shah, Yu Liu, Fangfang Tan, Jiang Zhu, Jimmy Wong, Jessica Li, Jiaxing Huang, Suvendu Jena, Yingxi Yu, Rahul Todkar), the Insights team (Ying Zhou, Rodrigo Aramayo, William Ernster, Eric Anderson, Nisha Rao, Angel Tramontin, Zean Ng), the Merlin team (Kunal Chopra, Durgam Vahia, Ishita Shah), the Data Science Productivity Team (Juanyan Li), and many others (particularly Tiger Zhang, Wei Di, Sean Huang, and Burcu Baran).

Finally, we would like to thank all of the early adopters and users of our explainable modeling systems, including the Salary AI, Learning AI, and JYMBII teams.