Creating a secure and trusted Jobs ecosystem on LinkedIn
January 26, 2021
Co-authors: Sakshi Jain, Grace Tang, Gaurav Vashist, Yu Wang, John Lu, Ravish Chhabra, Shruti Sharma, Dana Tom, and Ranjeet Ranjan
LinkedIn’s vision is to connect every member of the global workforce to economic opportunity. A key driver toward this vision is our world-class hiring marketplace, where we help job seekers find their next dream role and help recruiters find the best fit for their open positions. To be successful, we need to ensure that job seekers on LinkedIn feel confident that every job they see on our platform is real, fair, and safe, and to keep irrelevant, discriminatory, or fraudulent jobs off the platform.
To this end, we have designed a robust defense system that proactively identifies and removes jobs in violation of our Jobs Terms and Conditions or our Professional Community Policies, which outline our expectations for member and customer behavior on the platform. Every job posted on LinkedIn is evaluated by our defenses, which look for and act upon various types of policy violations. In this blog post, we go deeper into the defenses for member-posted jobs that help to keep our platform safe and trusted.
Multi-layered fraud protection in the Jobs ecosystem
Every job published by a member on LinkedIn goes through a series of steps, each of which acts as a checkpoint that hosts a suite of defenses against fraudulent jobs. These checkpoints include:
Creating or accessing a LinkedIn account
Setting up a payment method (optional)
Drafting and publishing the job
Throughout this process, our protections aim to detect abuse as early as possible in the job creation lifecycle. This means identifying a job poster with malicious intent or a bad payment before the fraudulent job posting is even drafted. In cases where abusers evade our proactive defenses, we have systems downstream to identify fraud once we have collected more information. In the next sections, we walk through these key steps for posting a job and the corresponding multi-layered defenses.
Multi-layered defense spread across three checkpoints for protection against fraudulent jobs
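To make the layering concrete, the checkpoint flow described above could be sketched roughly as follows. This is a minimal illustration, not LinkedIn's production code: all class names, signal names, and thresholds are invented for the example. The key property it demonstrates is that a posting attempt is stopped at the earliest failing checkpoint.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class JobPostingAttempt:
    """Illustrative context accumulated as a posting moves through checkpoints."""
    member_id: str
    signals: dict = field(default_factory=dict)

# A checkpoint is a function that returns True if the attempt may proceed.
Checkpoint = Callable[[JobPostingAttempt], bool]

def account_check(attempt: JobPostingAttempt) -> bool:
    # Block if upstream account defenses flagged the account as fake or hacked.
    return not attempt.signals.get("account_flagged", False)

def payment_check(attempt: JobPostingAttempt) -> bool:
    # Payment is an optional step: only evaluated when a payment method exists.
    if "payment_risk" not in attempt.signals:
        return True
    return attempt.signals["payment_risk"] < 0.9  # illustrative threshold

def content_check(attempt: JobPostingAttempt) -> bool:
    # Final gate: a fraud score for the drafted job itself.
    return attempt.signals.get("fraud_score", 0.0) < 0.8  # illustrative threshold

def evaluate(attempt: JobPostingAttempt, checkpoints: List[Checkpoint]) -> bool:
    """all() short-circuits, so evaluation stops at the first failing checkpoint,
    catching abuse as early in the lifecycle as possible."""
    return all(cp(attempt) for cp in checkpoints)

PIPELINE = [account_check, payment_check, content_check]
```

Because `all()` short-circuits, an account flagged at the first checkpoint never reaches the payment or content checks, which mirrors the "detect abuse as early as possible" goal described above.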
Checkpoint 1: Creating or accessing a LinkedIn account
Every job posted by a member begins with the member creating or accessing their LinkedIn account. Our first layer of defense against fraudulent job postings is to make sure that the account trying to post a job, whether it is a Consumer account or an Enterprise account, is legitimate and hasn't been compromised. Ensuring an Enterprise account hasn't been compromised is particularly important because jobs posted from Enterprise accounts have broader distribution and may receive increased attention from potential candidates.
We have invested in a suite of automated fake account defense systems that prevent or promptly remove fake accounts created on the site. The defenses range from models that score every registration attempt and challenge risky attempts, to models that detect anomalous member activity to identify fake accounts with high confidence and remove them. These automated defenses blocked or automatically removed 98.4% of the fake accounts that we removed from the platform per our latest Transparency Report.
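A common pattern for this kind of defense is to score each registration attempt and route it into one of three outcomes: allow, challenge, or block. The sketch below is a simplified illustration of that pattern; the thresholds are invented for the example and a real system would tune them against precision targets.

```python
def route_registration(risk_score: float,
                       challenge_threshold: float = 0.5,
                       block_threshold: float = 0.9) -> str:
    """Route a scored registration attempt (illustrative thresholds only).

    risk_score is assumed to be a model output in [0, 1], where higher
    means more likely to be a fake-account registration.
    """
    if risk_score >= block_threshold:
        return "block"       # high-confidence fake: reject outright
    if risk_score >= challenge_threshold:
        return "challenge"   # risky: require extra verification before proceeding
    return "allow"           # low risk: let the registration complete
```

The middle "challenge" band matters: it lets the system add friction for uncertain cases without blocking legitimate members outright.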
Enterprise accounts on LinkedIn require a custom contract, and hence are more difficult for bad actors to take advantage of. Once a contract is signed, licensed seats are allocated, and accounts can be linked to the seats. Our contracts do not allow seats to be resold, and we have provisions in place to handle such violations. For example, we can detect contracts whose seats are being resold by leveraging characteristics of the relationship between the seat holder and the company associated with the contract. Suspicious contracts are then sent for human review before we engage with the contract owner. This detection is crucial, given the broader distribution of jobs from Enterprise accounts.
We also have protections in place to prevent bad actors from hacking into existing accounts in order to post fraudulent jobs. We proactively invalidate any credentials we believe may have been compromised, to prevent unauthorized access. We deploy a suite of automated defenses to prevent and take down hacked accounts, which closely resembles the defense architecture for fake accounts. For example, models score every login attempt and request further proof of identity for suspicious login attempts. We also have post-login machine-learned models that are either triggered by a member action or run in the background using rich graph-based information. These models detect compromised accounts on the platform and lock them down for recovery by their original owners.
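One simple intuition behind background detection of compromised accounts is comparing a member's current activity to their own historical baseline. The sketch below uses a basic z-score heuristic as an illustration; it is not LinkedIn's actual model, which the post describes as using much richer graph-based information.

```python
import statistics

def is_anomalous(history: list, today: float, z_cutoff: float = 3.0) -> bool:
    """Flag activity far outside a member's own historical baseline.

    history: recent daily activity counts (e.g., jobs posted per day);
    today:   the current day's count. Purely illustrative heuristic.
    """
    if len(history) < 2:
        return False  # not enough baseline data to judge
    mean = statistics.fmean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        # Constant baseline: treat any deviation as anomalous.
        return today != mean
    # Flag only unusually *high* activity, typical of account takeover.
    return (today - mean) / stdev > z_cutoff
```

An account that normally posts a handful of jobs per day and suddenly posts dozens would trip a detector like this and could then be locked down for recovery.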
We deploy similar defenses to protect against Enterprise account hacks. In addition, to protect these accounts, we have instrumented two-factor authentication (2FA) at the enterprise level, so Recruiter Admins can now require their contracted Seat Holders to turn on 2FA in order to access Recruiter products. 2FA nearly eliminates the risk of Enterprise accounts being hacked and thus secures the Enterprise contract against unauthorized job postings.
Checkpoint 2: Setting up a payment method
After establishing a LinkedIn account, a job poster can optionally provide a payment method to set up a campaign for the job opening. LinkedIn uses market intelligence from payment gateways, as well as rich historical information about the member and the payment instrument, to evaluate whether the payment is suspicious. For legitimate recruiters, this step looks straightforward and easy, but behind the scenes, we're monitoring for suspicious payments, which are either directly rejected or sent to a review team for further vetting.
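The blending of gateway intelligence with historical signals could be sketched as below. The feature names, weights, and the three outcomes are invented for this illustration; the point is the three-way decision of approve, route to manual review, or reject.

```python
def assess_payment(gateway_risk: float,
                   instrument_age_days: int,
                   past_chargebacks: int) -> str:
    """Illustrative blend of payment-gateway risk and historical signals.

    gateway_risk: assumed score in [0, 1] from a payment gateway;
    all weights and thresholds here are made up for the sketch.
    """
    score = gateway_risk
    if instrument_age_days < 7:
        score += 0.2  # brand-new payment instruments carry extra risk
    score += min(past_chargebacks * 0.3, 0.6)  # cap the chargeback penalty
    if score >= 1.0:
        return "reject"   # clearly bad: reject directly
    if score >= 0.6:
        return "review"   # ambiguous: send to the review team for vetting
    return "approve"      # looks fine: no friction for the recruiter
```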
Checkpoint 3: Drafting and publishing the job
After establishing the payment method and company page, the next step in the job posting flow is drafting and publishing the job itself. This is our final layer of defense before a job posting goes live on the platform. The defenses in this step are a combination of product controls and risk modeling.
LinkedIn has carefully re-designed the job posting flow to deter bad actors while minimizing friction for legitimate posters. For example, we launched a product feature requiring job posters to confirm ownership of the corporate email associated with the company on the job posting. This significantly mitigates the risk of fraudulent jobs falsely claiming affiliation with established companies.
Once past the applicable product controls, every job posted by a Consumer account is scored by machine learning models that evaluate the likelihood of the job being fraudulent. These models use a large set of features, such as characteristics of the job poster, characteristics of the job itself (e.g., contact email, company), and the likelihood of the association between them. Jobs flagged as suspicious by the model are held from going live until they are verified by a human reviewer, allowing us to protect our members.
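As a rough illustration of score-then-hold, consider a logistic-regression-style scorer over poster and job features. The feature names, weights, and threshold below are invented for the sketch and are not LinkedIn's actual model; what it demonstrates is the dispatch decision, where suspicious jobs are held for human review instead of going live.

```python
import math

# Invented feature weights for the sketch (not a real model).
WEIGHTS = {
    "poster_account_age_days": -0.002,  # older accounts look less risky
    "free_email_domain": 1.5,           # contact email not on a company domain
    "poster_company_mismatch": 2.0,     # weak poster-company association
}
BIAS = -2.0

def fraud_probability(features: dict) -> float:
    """Logistic model: sigmoid of a weighted sum of the job's features."""
    z = BIAS + sum(WEIGHTS.get(name, 0.0) * value
                   for name, value in features.items())
    return 1.0 / (1.0 + math.exp(-z))

def dispatch(features: dict, review_threshold: float = 0.5) -> str:
    """Hold suspicious jobs for human review before they go live."""
    if fraud_probability(features) >= review_threshold:
        return "hold_for_review"
    return "publish"
```

Holding the job rather than rejecting it outright lets a human reviewer make the final call, trading a short delay for far fewer false removals of legitimate postings.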
Detection after job posting
We also continue to assess job postings once they're live on the platform to capture any fraudulent jobs that may have gotten past these prevention mechanisms. First, we have more complex machine learning models that run in the background and queue suspicious jobs for human review. We also listen to reports of suspicious jobs from our 760+ million members and queue these for human review. In parallel, there are numerous fake and hacked account models running online and offline to detect abusive accounts and take them down, which also removes any fraudulent jobs posted by them.
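Both model-flagged jobs and member reports feed human review, and a natural way to order that work is a priority queue keyed on risk, so the riskiest jobs are reviewed first. The sketch below shows that idea with Python's `heapq`; the class and its interface are invented for illustration.

```python
import heapq

class ReviewQueue:
    """Illustrative priority queue for human review: highest-risk jobs first."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker to keep insertion order for equal risk

    def enqueue(self, job_id: str, risk: float) -> None:
        # heapq is a min-heap, so negate risk to pop the riskiest job first.
        heapq.heappush(self._heap, (-risk, self._counter, job_id))
        self._counter += 1

    def next_for_review(self) -> str:
        return heapq.heappop(self._heap)[2]
```

In practice a job could enter such a queue from either source, with the model's score (or the volume of member reports) setting its priority.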
Our team’s highest priority is ensuring that members looking for jobs can trust those posted on LinkedIn. As we introduce new ways for recruiters to post jobs (for example, free jobs), we simultaneously adapt our defenses in terms of both risk modeling and product controls. We have designed our defense layers so that both product controls (like email verification) and machine learning models are modular in nature, allowing them to be scaled, adapted to new job types, and applied selectively. We are also actively enhancing the performance of various detection models via new classification algorithms, improved signal extraction, and careful curation of model training data. Similarly, we are evaluating ways to strengthen security through product design, increasing friction for bad actors.
While this post specifically covers LinkedIn’s defenses for fraudulent jobs, we also have systems to detect irrelevant jobs, as well as discriminatory jobs, and are constantly working on improving them. We are committed to protecting our members and we look forward to continuing to share our progress.
Building a rich set of product features and machine learning models to provide defense in depth in our jobs ecosystem has truly been a collaborative effort spanning multiple organizations and functions. We’d like to acknowledge our partners in the following teams, without whom this would not be possible: Jobs Trust Engineering, Product and Data Science teams from Trust & Talent Solutions Engineering, Trust AI, Trust & Safety, and Legal and Policy.