The technology behind fighting harassment on LinkedIn

Co-authors: Grace Tang, Pavan K. Ganganahalli Marulappa, Montinee Khunvirojpanich, and Ting Chen

LinkedIn is an active professional community where our members come to stay informed, build meaningful relationships, and find a job. In order for members to confidently engage in this community, they have to feel safe. This sense of safety is at risk when spam, inappropriate, or harassing content is shared on the platform. This content is not tolerated on LinkedIn, and we have rolled out proactive and reactive measures that employ a combination of technology and human expertise to protect members.

We have clear policies and practices, and we also listen closely to our members to deepen our understanding of the experiences that can erode a member’s sense of safety on the platform. While we’re always refining and improving the tools we use to prevent or quickly stop harassment, we encourage members to report negative experiences, and we actively review those reports and track the trends. We find that reported cases of harassment predominantly stem from private messages rather than the public feed. This finding has led to a series of initiatives and projects to better protect our members against harassment in messaging. While we work to protect members from any type of harassment, this blog post focuses specifically on sexual harassment, given the nuances of understanding and solving this problem.

Harassment in messaging

Combating harassment in private messages is especially important because it feels targeted, leading to an acute loss of safety for the victim. A particular challenge in approaching this work is that the term “harassment” is broad in scope: every member’s experience with harassment is unique and personal to them.

Strategy
To combat harassment in the different forms it takes, we act quickly on member reports, but we find that certain kinds of harassment, such as unwanted advances, are under-reported. Our research conducted with our members reveals:

  • The sense of being targeted prompts some members to simply block violating members to make the problem “go away”, rather than reporting the message for our teams to action. 
  • The nature of one-on-one private messaging on social networks can introduce a fear of retribution when one user reports another for inappropriate behavior. This concern persists even though LinkedIn never shares messaging reports with the offending member.

Given the underreported nature of harassment and the potential severity of its impact, we have the following strategy to proactively minimize harassment on the platform while ensuring a supportive member experience.

  1. Educate and enforce clear policies: We educate members about our Professional Community Policies and enforce them when we find harassment in messages. 
  2. Detect harassment: We are deploying machine learning models to detect potential harassment within messaging. These models protect the recipient by hiding potentially harassing messages, while giving the recipient the ability to un-hide, view, and optionally report them. We go into more detail on this work below.
  3. Support affected members: We have shared tips on the options members have when they experience harassment. We send 100% of harassment reports for review, and we will soon be closing the loop with reporting members to provide more transparency into the actions taken.

Detecting harassment

Along our journey designing a safe, trusted, and professional platform, we have found that the behavior of members who send sexually harassing messages generally falls into three categories:

  • Romance Scams: Members who carry out financial scams through fake or hacked accounts, using romantic messaging to defraud a member. This behavior typically surfaces through the suspicious account signals that our fake account and hacked account defenses are designed to stop.
  • Inappropriate Advances: LinkedIn is not a dating website, but some members choose to inappropriately solicit other members for romantic purposes. These members send multiple messages soliciting relationships, often to members they don’t know. We address this population with machine learning designed to detect this behavior.
  • Targeted Harassment: This includes bringing an off-platform conversation or dispute onto LinkedIn, such as stalking or trolling. These violations are less common and may originate from fake accounts or real members. We plan to adapt our modeling solution for Inappropriate Advances to address this population.

When members report content as harassment, our teams look into the specific case and review the offending content. To address Inappropriate Advances, we studied the abuse from those reported cases at the sender behavior level, the message content level, and the (sender, recipient) interaction level. Using that data, we then built a machine learning harassment detection system consisting of a sequence of three models that together identify the violating members and their harassing messages with high precision:

  1. First, sender behavior (e.g., site usage, invitations sent) is scored by a behavior model. This model is trained using members who were confirmed to have conducted harassment (surfaced via member reports).
  2. Second, content from the message is scored by a message model. This model is trained using messages that have been reported as and confirmed to be harassment. 
  3. Finally, the interaction between the two members in the conversation (e.g., how often they respond to one another, and whether most of the messages are predicted to be harassment by the message model) is scored by an interaction model. This model is trained using signals from the conversations that resulted in harassment.
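
To make the three stages concrete, here is a minimal sketch in Python of what each model’s inputs and scores might look like. All feature names, signals, and scoring heuristics below are illustrative assumptions standing in for learned models; they are not LinkedIn’s actual features or classifiers.

from dataclasses import dataclass

@dataclass
class BehaviorFeatures:
    # Hypothetical sender-behavior signals (e.g., site usage, invitations sent)
    invites_sent_per_day: float
    invite_accept_rate: float  # fraction of invitations accepted, in [0, 1]

@dataclass
class InteractionFeatures:
    # Hypothetical (sender, recipient) conversation signals
    recipient_response_rate: float   # how often the recipient replies, in [0, 1]
    flagged_message_fraction: float  # share of messages flagged by the message model

def behavior_score(f: BehaviorFeatures) -> float:
    # Stage 1 stand-in: the real model is trained on members confirmed
    # (via member reports) to have conducted harassment.
    return min(1.0, f.invites_sent_per_day / 50.0) * (1.0 - f.invite_accept_rate)

def message_score(text: str) -> float:
    # Stage 2 stand-in: a toy keyword heuristic in place of a learned text
    # classifier trained on reported and confirmed harassing messages.
    phrases = ("so beautiful", "romantic", "looking for love")
    hits = sum(p in text.lower() for p in phrases)
    return min(1.0, hits / len(phrases))

def interaction_score(f: InteractionFeatures) -> float:
    # Stage 3 stand-in: one-sided conversations dominated by flagged
    # messages score as more suspicious.
    return (1.0 - f.recipient_response_rate) * f.flagged_message_fraction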

We apply these models in sequence to minimize unnecessary account or message analysis: scoring only proceeds to the next model if the previous model flags the traffic as suspicious. At the end of the sequence, the harassment detection system triggers a recently launched feature that hides messages detected as harassing and gives recipients the option to un-hide and easily report them.
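
As a rough illustration of that short-circuiting, the sketch below (continuing the hypothetical Python setting above) chains the three scorers with per-stage thresholds, advancing only when the previous stage flags the traffic. The threshold values and function signatures are assumptions made for illustration, not the production system’s operating points.

# Illustrative thresholds; the real operating points are not public.
BEHAVIOR_THRESHOLD = 0.7
MESSAGE_THRESHOLD = 0.8
INTERACTION_THRESHOLD = 0.9

def should_hide_message(sender_features, message_text, interaction_features,
                        behavior_score, message_score, interaction_score):
    # Stage 1: a cheap sender-behavior check gates the costlier analysis below.
    if behavior_score(sender_features) < BEHAVIOR_THRESHOLD:
        return False
    # Stage 2: score the message content only for suspicious senders.
    if message_score(message_text) < MESSAGE_THRESHOLD:
        return False
    # Stage 3: score the (sender, recipient) interaction last; a high score
    # triggers hiding the message for the recipient.
    return interaction_score(interaction_features) >= INTERACTION_THRESHOLD

When all three stages flag the traffic, the message is delivered hidden, and the recipient keeps full control to un-hide, view, and report it.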

Diagram: the sequence of scoring models for detecting harassment

Looking ahead

Detecting and mitigating harassment on LinkedIn is a top priority for our team. While there is no perfect solution to this challenging problem, we work every day to evolve our strategy for minimizing harassment on LinkedIn and its impact on members. Toward this end, we are actively improving the harassment detection system’s precision and recall through new modeling techniques, more sophisticated training data selection, and enhanced feature engineering. In parallel, we are exploring new product experiences to lessen the harm of harassment in cases where we are not able to proactively prevent it. As we work to address safety in other areas of the member experience, in the coming months we will begin closing the loop with members about what we do with their reports. We are committed to protecting our members, and we look forward to continuing to share our progress.

Acknowledgements

Launching a machine learning solution to detect unwanted romantic advances has been a cross-functional effort. Throughout this journey, we’d like to acknowledge our partners on the following teams, without whom this would not have been possible: Trust & Safety teams in Policy & Content Enforcement, Trust and Messaging Engineering, Trust and Messaging Product, Trust AI, Trust Data Science, Legal, and Information Security.