Understanding dwell time to improve LinkedIn feed ranking

May 12, 2020

Co-authors: Siddharth Dangi, Johnson Jia, Manas Somaiya, and Ying Xuan

The LinkedIn feed is the cornerstone of the member experience. It’s where our members post ideas, career news, questions, and jobs in an array of formats, including short text, long-form articles, images, and videos. The Feed AI Team’s mission is to help LinkedIn’s members discover the most relevant conversations and content in their feed to help them be more productive and successful. In this post, we explore how understanding the distribution of time our members spend on the feed has helped us improve the algorithms that rank content.

Overview of LinkedIn feed ranking

Let’s dive into an example. When member Alice visits LinkedIn, there are tens of thousands of candidate posts or updates that could potentially show up in her feed. A first-pass, candidate generation layer applies an efficient and lightweight ranking algorithm to identify the top candidate updates to show her. But among these top candidates, how is the ranking fine-tuned to determine the final order? If Alice’s connection Bob recently shared an interesting article, what determines where Bob’s post will appear in Alice’s feed?

We start with the assumption that if Alice were to see Bob’s post and find it to be relevant, she would click on it to engage with the content, the author, or the conversation. Specifically, she may react (“Like”, “Celebrate”, etc.), comment, or re-share—these three options are what we call “viral actions” because they can have downstream and/or upstream network effects. For example, a re-share will propagate the article downstream, as Alice’s connections will see the article in their feed. On the other hand, a comment from Alice will have an upstream effect, as it provides valuable feedback to the creator (Bob) that may encourage him to post more often. Therefore, for each candidate update, we need to consider both Alice’s likelihood of engagement and the potential downstream and upstream effects on her network.

To accomplish this, we train our machine learning models to predict several quantities for each possible click and viral action (click, react, comment, share):

  • P(action) = Probability of Alice taking this action on the update
  • E[downstream clicks/virals | action] = Expected downstream clicks/virals if Alice takes this action
  • E[upstream value | action] = Expected upstream value to Bob if Alice takes this action

The outputs of these models are then synthesized into a single score using a weighted linear combination, where the weights are tuned to ensure that all three components are appropriately balanced in order to maintain a healthy feed ecosystem. Finally, this score is used to perform a point-wise ranking of all the candidate updates.
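As a rough sketch, the weighted linear combination and point-wise ranking might look like the following (the weight values and function names are illustrative assumptions, not LinkedIn's actual implementation):

```python
# Hypothetical sketch of combining per-action predictions into one score.
# Weights are placeholders; the real values are tuned for ecosystem health.

ACTIONS = ["click", "react", "comment", "share"]

def score_update(p_action, e_downstream, e_upstream,
                 w_engage=1.0, w_down=0.5, w_up=0.5):
    """Combine per-action model outputs into a single ranking score."""
    score = 0.0
    for a in ACTIONS:
        score += p_action[a] * (
            w_engage                    # Alice's own engagement
            + w_down * e_downstream[a]  # expected downstream clicks/virals
            + w_up * e_upstream[a]      # expected upstream value to the author
        )
    return score

def rank(candidates):
    """Point-wise ranking: score each candidate independently, then sort."""
    return sorted(candidates, key=lambda u: score_update(**u["preds"]), reverse=True)
```

Because the ranking is point-wise, each update's score depends only on its own predictions, which keeps scoring cheap and parallelizable.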

Why dwell time matters

Note that the ML models used above to generate the final score for each update focus primarily on predicting click- and viral-related quantities. This approach has several shortcomings:

  1. Click and viral actions can be rare, especially for passive consumers of the feed. While these members may still visit the feed frequently and find value in the updates they see, they may shy away from taking click and viral actions.
  2. Click and viral actions are primarily binary indicators of engagement: either you take the action or you don’t. The text associated with a comment or re-share (when available) can provide a richer signal, but that signal is more difficult to interpret.
  3. Clicks are noisy indicators of engagement. For example, a member may click on an article, but quickly close out, realizing it’s not relevant, and return to the feed within a few seconds. We call these “click bounces.”

To compensate for some of these shortcomings, we looked at aggregated per-update dwell time to see if it could help us better improve feed ranking. At a high level, each update viewed on the feed generates two types of dwell time. First, there is dwell time “on the feed,” which starts measuring when at least half of a feed update is visible as a member scrolls through their feed. Second, there is dwell time “after the click,” which is the time spent on content after clicking on an update in the feed.

Aside from a few notable exceptions, we assume that members value their time, and will spend it appropriately on feed content that they’re interested in. With this assumption in mind, dwell time has the following advantages over solely looking at click and viral actions:

| Click/Viral Actions | Dwell Time |
| --- | --- |
| Not always measurable | Always measurable |
| Binary indicator of engagement | Real-valued measure of engagement |
| Noisy indicator of engagement | Can be a more reliable indicator of engagement |
| Positive signals are rather sparse | No shortage of signals |

Given these advantages, we explored several different methods of incorporating dwell time into our modeling. Below, we take a deep dive into one example where analyzing dwell time data led us to add a new machine learning model that brought significant improvements to feed ranking.

Deep dive: Defining a new concept of “skipped updates”

We analyzed members’ dwell time on the feed by computing the empirical CDFs (cumulative distribution functions) of dwell time per update on our mobile app. As expected, we observed that members tend to spend more time viewing the updates they decide to take a viral action on.
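For reference, an empirical CDF over dwell-time samples can be computed in a few lines of NumPy. This is a generic sketch, not LinkedIn's pipeline:

```python
import numpy as np

def empirical_cdf(dwell_times):
    """Return (sorted values, cumulative probabilities) for plotting an ECDF."""
    x = np.sort(np.asarray(dwell_times, dtype=float))
    y = np.arange(1, len(x) + 1) / len(x)
    return x, y

def cdf_at(dwell_times, t):
    """F(t) = P(dwell time < t), estimated from a sample."""
    x = np.asarray(dwell_times, dtype=float)
    return float(np.mean(x < t))
```

Computing one such ECDF per action type (viral action, click-only, no action) gives the curves compared in the figure below.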

Figure: Empirical CDFs of dwell time per update on the LinkedIn feed (mobile app)

We also observed a category of updates that members view for a short amount of time without taking any click or viral action on them (as depicted by the green curve in the above graph). In some sense, this isn’t particularly surprising because it reflects the typical way that we consume feed content on our mobile devices—for each update we see, we often make a quick (sometimes subconscious) decision about whether we’re interested in the content. In some cases, we decide not to spend additional time viewing an update, and skip over it to continue scrolling.

To make the notion of a “skipped update” more concrete, we looked for a threshold (Tskip) that would allow us to classify updates viewed for less than Tskip seconds as having been “skipped” by the member. In particular, we set out to estimate P(click/viral action on update | dwell time = T), and specifically investigate if there is a natural threshold Tskip below which this probability is close to 0. Although this probability is difficult to estimate accurately for a specific dwell time = T, we approximated it within certain intervals of time using Bayes’ Theorem and our empirical CDFs:

P(action | a ≤ dwell time < b) = P(a ≤ dwell time < b | action) × P(action) / (F(b) − F(a))

where F(T) = P(dwelltime < T). Indeed, we identified that such a natural threshold Tskip does exist, suggesting that updates viewed for less than this amount of time are not particularly engaging and tend to be quickly “skipped” by members.
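The interval approximation described above can be sketched in plain Python, assuming we have dwell-time samples for all viewed updates and for the subset of views that received a click or viral action (the function name and inputs are hypothetical):

```python
def p_action_given_interval(dwell_with_action, dwell_all, a, b):
    """Estimate P(action | a <= dwell time < b) via Bayes' theorem,
    using empirical frequencies in place of the true distributions.

    dwell_with_action: dwell times for views that received an action
                       (a subset of dwell_all).
    dwell_all:         dwell times for all views.
    """
    n_all = len(dwell_all)
    n_act = len(dwell_with_action)
    if n_all == 0:
        return 0.0
    p_action = n_act / n_all  # prior P(action)
    # Empirical P(a <= dwell < b | action) and P(a <= dwell < b) = F(b) - F(a)
    p_interval_given_action = sum(a <= t < b for t in dwell_with_action) / max(n_act, 1)
    p_interval = sum(a <= t < b for t in dwell_all) / n_all
    if p_interval == 0:
        return 0.0
    return p_interval_given_action * p_action / p_interval
```

Sweeping the interval [a, b) across increasing dwell times traces out a curve like the one in the figure below, and Tskip is the point where it first becomes non-zero.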

Figure: The threshold Tskip for classifying an update as “skipped” is chosen as the value T where the blue curve, P(action on update | dwell time = T), starts to become non-zero

Interestingly, we found that the value of Tskip was a good choice of threshold for all of the heterogeneous types of feed updates (text posts, images, videos, etc.). In other words, although members may have different levels of engagement when they spend more time viewing different types of updates, it seems that they are able to make a “skip or not” decision within Tskip seconds, regardless of the update type.

Figure: A single choice of the threshold Tskip works well for all of the heterogeneous update types

Incorporating a new P(skip) model into feed ranking

Based on the valuable insights we derived from our members’ dwell time behavior, we incorporated a new “P(skip)” objective into our feed value function that predicts the probability that the member will “skip” an update:

P(skip) = P(member’s dwell time on this update < Tskip secs)
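As a minimal illustration of the training label this implies (the actual Tskip value is not disclosed, so the threshold below is a placeholder):

```python
# Hypothetical labeling of training examples for the P(skip) model.
T_SKIP = 2.0  # seconds; placeholder value, not LinkedIn's actual threshold

def skip_label(dwell_seconds, t_skip=T_SKIP):
    """Binary target: 1 if the member skipped the update, else 0."""
    return 1 if dwell_seconds < t_skip else 0
```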

Similar to our existing models for predicting click and viral actions, our P(skip) model is a logistic regression model that uses both raw features and “interaction” features that are automatically learned by training a boosted decision tree model. At a high level, the types of features used by the model include:

  • member-side features (e.g., member’s profile data)
  • update-side features (e.g., number of global clicks and viral actions on this update)
  • member-update features (e.g., member’s historical affinity to posts from the same author)
  • other contextual features (e.g., time of day)

Since we consider a member skipping an update to be a negative outcome, we incorporated the new model into our final ranking function by reducing the score of all updates by an amount proportional to the predicted P(skip) value.
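The pattern of using boosted-tree leaves as interaction features for a logistic regression, plus the score penalty, could be sketched with scikit-learn as follows. The synthetic data, hyperparameters, and penalty weight are all assumptions for illustration:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))                       # stand-in member/update/context features
y = (X[:, 0] + X[:, 1] * X[:, 2] > 0).astype(int)   # stand-in "skip" labels

# 1) Boosted trees learn feature interactions automatically.
gbdt = GradientBoostingClassifier(n_estimators=20, max_depth=3, random_state=0)
gbdt.fit(X, y)
leaves = gbdt.apply(X)[:, :, 0]                     # leaf index per tree

# 2) One-hot encoded leaf indices serve as "interaction" features
#    alongside the raw features in a logistic regression.
enc = OneHotEncoder(handle_unknown="ignore")
X_lr = np.hstack([X, enc.fit_transform(leaves).toarray()])
p_skip_model = LogisticRegression(max_iter=1000).fit(X_lr, y)

# 3) Since skipping is a negative outcome, reduce the final ranking score
#    in proportion to the predicted P(skip).
def final_score(base_score, p_skip, w_skip=1.0):
    return base_score - w_skip * p_skip
```

Each tree's leaf index identifies a conjunction of feature splits, so the one-hot leaf features let the linear model capture non-linear interactions without hand-engineering them.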

We also looked into adding member dwell-time signals as features into our modeling pipeline. Through numerous experiments, we found that a combination of member-update features (which estimate a member’s interest in content of a certain type based on the count of not-skipped updates) together with update-side features (which estimate the popularity of the update through a similar not-skipped count) provided the largest offline metric lifts for the P(skip) model, consistently increasing the model’s area under the ROC curve by as much as 10% across multiple trainings.
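A hypothetical sketch of such count-based features, assuming view logs of (member, update, content type, dwell time) tuples and a placeholder skip threshold:

```python
from collections import Counter

def build_dwell_count_features(view_logs, t_skip=2.0):
    """Aggregate 'not skipped' view counts from raw view logs.

    view_logs: iterable of (member_id, update_id, content_type, dwell_secs).
    Returns (member-content-type counts, per-update counts).
    """
    member_type_counts = Counter()   # member's interest in a content type
    update_counts = Counter()        # update's popularity
    for member_id, update_id, content_type, dwell in view_logs:
        if dwell >= t_skip:          # a "not skipped" view
            member_type_counts[(member_id, content_type)] += 1
            update_counts[update_id] += 1
    return member_type_counts, update_counts
```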

To measure the impact of our new P(skip) model and features, we conducted several online A/B experiments on a small percentage of LinkedIn members. Overall, we found the results to be very positive. We saw a large decrease in the number of skipped updates and observed that our members interacted much more with their feed updates through clicks and viral actions. In addition, members noticeably spent more time engaging on the feed, largely due to the addition of the aforementioned dwell-time based features.

These metric gains suggest that the introduction of the P(skip) model and the dwell time-based features led to a significant improvement in the quality of the content shown in the feed, and consequently a better feed experience for our members. Based on the results of the A/B experiments and other positive indicators, we ramped our new ranking model to all LinkedIn members.

What’s next?

As demonstrated by the above example, analyzing members’ dwell time led to useful insights that allowed us to directly improve ranking on the LinkedIn feed. However, the work described in this post is only one component of a larger strategy to incorporate dwell time into our AI modeling. Here’s an overview of some of our other efforts in this area:

  • Models. Similar to P(skip), we’re experimenting with new models that can enhance the feed value function, such as P(“long dwell”) = P(dwell time on update > T) and regression models that directly predict the expected dwell time per update. Furthermore, we’re experimenting with methodologies such as Generalized Linear Mixed Effect (GLMix) models to effectively personalize the P(skip) model for each individual member or update.
  • Features. We’re working on engineering other new features related to dwell time, such as learning embeddings for members and updates based on dwell time.
  • Data. We can use a member’s dwell time on an update to modify the weight and/or label of our training data points, which can improve data quality and help align our models to optimize for objectives related to dwell time.

As we explore these areas, we remain cognizant that we don’t want to blindly increase members’ time spent on the feed. We are firm believers that time well spent is better than more time spent. Overall, we will continue working to understand members’ dwell time and incorporate our learnings to improve our members’ day-to-day experiences on the feed.


Many colleagues and teams played a role in this work in one way or another. In particular, we would like to thank Bonnie Barrilleaux, Jenny Wu, Mina Doroud, and Dylan Wang for their helpful discussions, data analysis, and prior efforts on dwell time modeling that helped lay the foundation for this work. In addition, we appreciate the Feed Infra and Data Science teams for helping us measure the impact of our experiments and deliver our improved models to all LinkedIn members. Finally, we are grateful for our fellow Feed AI team members for all the ways in which they have supported this work.