Engineering LinkedIn Reactions
April 17, 2019
As you casually scroll through a news feed, you may “like” a post here and there. “Liking” has become so second-nature that we don’t often think about what happens the minute you hit that “like” button. When we began considering building out our “likes” feature into a set of reactions on LinkedIn, it took time for me to understand how complex a “like” really is. Understanding these complexities was a prerequisite for building out our new Reactions feature, which allows for more expressive social actions on LinkedIn.
LinkedIn’s New Reactions
On LinkedIn, you can expect a “like” button to appear on every post in the feed, in every context in which a post appears. You can expect it to maintain state wherever it’s seen. You can expect it to notify the author that you liked the post, and you can expect posts with more engagement to rank higher in your feed.
The architectural reach of a “like” in the LinkedIn ecosystem cannot be overstated. “Likes” flow through several services involving dozens of code repositories across multiple different apps. There are 27 development teams across LinkedIn with a stake in “likes.” Although the member experience may appear straightforward, the technology behind it is not.
Given this complexity, we had to develop strategies and cross-team collaboration channels in order to tackle the challenge of building out “likes” into reactions:
First, we had to reduce a complex engineering and UX problem to its foundational use case: reactions on feed posts. Once that foundation was established, we used its technical architecture and user experience as the platform informing how reactions would scale to the feature's various consumers and use cases.
Next, in order to execute on a complex, multi-site engineering project, we had to rely on some of our core company values to effectively manage the feature.
The backend platform: Reactions data model
As a first step, we created a data model to represent reactions. Conceptually, reactions are composed of three components:
An actor: Who created the reaction?
An object: What is the entity that was reacted to?
A verb: Which reaction was selected?
Within each of these components, we had to determine how much flexibility to design for:
Although every reaction will require an actor, there are several possible types of actors (e.g., members, companies, schools). Therefore, this field should be general enough that various actor types are supported.
Similarly, every reaction requires an entity. We currently only support reacting to posts and articles, but the model should not limit the types of entities to which a member can react. In the future, we may want to support reacting to comments, messages, or jobs, etc.
The reaction type verb required deeper consideration. How much flexibility should we allow in terms of the types of reactions supported?
Three questions drove this consideration:
How much experimentation will we want to do with the set of Reactions?
Do we want to support one-off reactions (e.g., an icon added specifically for a holiday)?
What do we need to do to maintain legacy “likes” for backwards compatibility, while still including them as a part of the new reactions set?
The first two questions were answered in collaboration with our product partners. We determined that we wanted to support controlled experimentation: experimentation within a predefined set of reaction types. When we initially brainstormed which reactions to include in the set, it became clear that there was a finite number of broad signals we would consider as a reaction. We also front-loaded the experimentation without needing to build anything into the product; for example, we hosted a series of user studies to understand how members interpret a variety of reaction types. By the time we decided which types to explore in the product experience, we had narrowed the set to 10 possible options.
We also determined that we did not want to support one-off reactions. Building a new reaction type has implications across the ecosystem: our relevance models would have to learn how to interpret the new signal, and data analytics would have to update the tracking they expect for it. Once we introduced a bespoke reaction, we would have to support that type of reaction forever, which means maintaining the consumption experience for its icon and keeping it in the design systems library even as the design frameworks evolve. Given these significant ramifications, we determined that it would be too expensive to support temporary reactions.
Once we had these questions answered, we consolidated a list of 10 reaction types to be supported by the platform. The reaction type field would only accept the value of one of these 10 predetermined types, while the UI would only display five of this broader set of 10 for a member to select. This way, we could display any five of the 10 reaction types for experimentation as we finalized the set that we wanted to launch with. Knowing that we would not support one-off reactions, we were able to explicitly define these reaction types as fixed values in the data model. To learn more about the design journey in creating these reactions, please see more details here.
So what about legacy “likes”? The reactions feature was bootstrapped with a data migration, in which “likes” were copied from an existing service to a new one that was spun up to store reactions. This enabled us to conceptualize “likes” as a type of reaction in the new database while temporarily supporting legacy “like” objects as clients iteratively migrated to the new one.
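To make the model concrete, here is a minimal sketch of the actor/object/verb shape described above. This is not LinkedIn's actual schema; the type names, URN formats, and the subset of reaction values shown are illustrative assumptions.

```typescript
// Hypothetical sketch of the actor/object/verb reaction model.
// Names and URN formats are illustrative, not LinkedIn's actual schema.

// The verb: a closed enum. Only predetermined types are accepted,
// and the legacy "like" is simply one member of the set.
type ReactionType =
  | "LIKE"        // legacy "likes" map onto this value
  | "CELEBRATE"
  | "LOVE"
  | "INSIGHTFUL"
  | "CURIOUS";    // the platform defined 10 such fixed values in total

// The actor: a URN-style string, so members, companies, schools, etc.
// can all create reactions without changing the model.
type ActorUrn = string;  // e.g. "urn:member:123" or "urn:company:456"

// The object: also URN-style, so the set of reactable entities
// (posts and articles today; perhaps comments or jobs later) stays open.
type EntityUrn = string; // e.g. "urn:post:789"

interface Reaction {
  actor: ActorUrn;
  entity: EntityUrn;
  reactionType: ReactionType;
  createdAt: number; // epoch milliseconds
}

// During the bootstrap migration, a legacy "like" becomes a reaction
// of type "LIKE" with its meaning unchanged.
function migrateLegacyLike(actor: ActorUrn, entity: EntityUrn): Reaction {
  return { actor, entity, reactionType: "LIKE", createdAt: Date.now() };
}
```

Keeping the verb closed while leaving the actor and object open-ended is what lets the same record support new surfaces without schema changes, while still ruling out one-off reaction types.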
The database migration introduced significant challenges. As a prerequisite to launching reactions, we first had to migrate all “likes” traffic to the new database. “Likes” traffic is skewed in the sense that certain posts are really viral, so a small subset of posts drive a disproportionately high percentage of reads and writes against the database. Because we made changes to the data structures that introduced a different pattern for reading and writing reactions, the hardware alone could no longer handle these spikes in traffic without significant degradation to the user experience. To solve this problem, we introduced a caching layer to keep popular posts in memory. This improved our read throughput on these popular posts by serving requests from memory rather than disk. This is just a single example among many challenges with creating the data platform for reactions.
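One way to picture that caching layer is a read-through cache in front of the store: a simplified in-memory sketch, not the production implementation, which would use a dedicated caching tier rather than an in-process map.

```typescript
// Simplified read-through cache for reaction counts on popular posts.
// Illustrative only: production uses a separate caching tier, not a Map.

type CountFetcher = (entityUrn: string) => number; // stands in for a disk/DB read

class ReactionCountCache {
  private cache = new Map<string, number>();
  hits = 0;
  misses = 0;

  constructor(private fetchFromStore: CountFetcher) {}

  // Serve from memory when possible; on a miss, read the backing
  // store once and populate the cache.
  get(entityUrn: string): number {
    const cached = this.cache.get(entityUrn);
    if (cached !== undefined) {
      this.hits++;
      return cached;
    }
    this.misses++;
    const count = this.fetchFromStore(entityUrn);
    this.cache.set(entityUrn, count);
    return count;
  }

  // Writes must keep the cached value coherent with the store.
  increment(entityUrn: string): void {
    const current = this.get(entityUrn);
    this.cache.set(entityUrn, current + 1);
    // ...the write would also be persisted to the backing store here.
  }
}
```

For a viral post read thousands of times per second, only the first read touches disk; every subsequent read is served from memory, which is exactly the throughput win described above.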
The client platform: Reactions desktop web components
It turned out that the data model was not the only thing that needed to act as a platform; so did the UI. Members can currently react to posts on 11 different pages (including feed, search, articles, etc.), and this does not include future iterations that might introduce reactions in other contexts.
We had to figure out a way to build the “reaction” button and reactions menu so that they could be leveraged throughout the app. To make this more tangible, let’s use desktop web as an example.
The “like” button is relatively simple: it’s one button with two actions and two states. Members can “like” or “unlike” something, and therefore the state of the button can be liked or not. Because it was so easy for teams to build their own “like” button, it came in all different shapes and sizes depending on where it appeared in the app.
When we built the reactions platform, we wanted it to replace all of the existing implementations of the “like” button to reduce tech debt and to achieve a more consistent user experience. It needed to be standardized wherever it exists: the icons needed to be consistent, the animations for the menu needed to be maintained, the network requests had to include all CRUD operations, etc.
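In spirit, the standardized button reduces to a small state machine whose transitions map onto those CRUD operations. The sketch below is hypothetical (the real component is a UI component wired to network requests, not shown here); the class and method names are invented for illustration.

```typescript
// Hypothetical state machine for the standardized reaction button.
// Each non-trivial transition corresponds to a network request
// (create / update / delete) in the real component.

type ReactionType = "LIKE" | "CELEBRATE" | "LOVE" | "INSIGHTFUL" | "CURIOUS";

class ReactionButtonState {
  current: ReactionType | null = null;

  // Selecting from the menu either creates a new reaction or
  // updates an existing one to a different type.
  select(type: ReactionType): "CREATE" | "UPDATE" | "NOOP" {
    if (this.current === type) return "NOOP";
    const op = this.current === null ? "CREATE" : "UPDATE";
    this.current = type;
    return op;
  }

  // Removing an active reaction is the generalization of "unlike".
  remove(): "DELETE" | "NOOP" {
    if (this.current === null) return "NOOP";
    this.current = null;
    return "DELETE";
  }
}
```

Centralizing these transitions in one platform component is what keeps the icons, animations, and network behavior identical across every surface that embeds the button.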
It was a challenge to figure out how to build reactions, but another challenge arose when determining where to build them in order to make reactions a platform. In our LinkedIn desktop web repository, each product maps to a package, and the goal of each package is to consolidate the set of files that implement that product. It pays to create small packages so that dependencies can remain lean.
There were two options for us to package reactions:
Create a new package that included all the reactions components: the button, the counts that depict reactions, the menu, and the modal that displays the list of reactors.
Create two new packages: one that included reactions creation components only and one that included the counts and reactor list.
We decided to go with the second approach. Conceptually, the creation experience involves interacting with the button to trigger the menu, selecting a reaction, and updating the button state. The consumption experience, however, involves clicking on the count of total reactors to render the modal that displays the list of reactors. While the creation experience is largely uniform, the consumption experience differs based on the context in which it appears. Because the packages were separated, partner teams could adopt the existing creation experience while building their own consumption experience.
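The boundary can be pictured as two independent entry points. The module and function names below are invented for illustration; they are not the real package names.

```typescript
// Invented module boundaries illustrating the two-package split;
// these are not the actual package names in the desktop web repository.

// "Package" 1: reaction creation (button + menu + selection handling).
const ReactionsCreation = {
  makeMenu(types: string[], onSelect: (type: string) => void) {
    return { types, select: (t: string) => onSelect(t) };
  },
};

// "Package" 2: reaction consumption (counts + reactor list).
const ReactionsConsumption = {
  formatCount(total: number): string {
    return total === 1 ? "1 reaction" : `${total} reactions`;
  },
};

// Because the two sit behind separate entry points, a partner team can
// depend on creation alone and supply its own consumption experience.
```

Splitting along the creation/consumption seam keeps each consumer's dependency graph lean: a page that only renders counts never pulls in the menu and its animations.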
Leveraging Company Values for Cross-Team Collaboration
Although I highlighted two examples for creating the platform, building out the platform required overcoming several more challenges and designing solutions across the stack. Once we overcame all those hurdles and established the platform, it had to be put to the test. Here is a glimpse into the questions we had to answer in order to introduce reactions:
How can companies react to posts?
How can we support real-time reactions during live videos?
How can we provide analytics for reactions to sponsored updates?
How do reactions affect distribution, relevance, and virality of posts in the feed?
What types of notifications should be triggered by a reaction? In-app? Push? Email?
How do external API partners interact with reactions?
Each of these questions involved one or more other teams that we collaborated with to ensure that reactions would work similarly to their existing “like” member experience. While some of these questions were relatively straightforward to resolve, others were so complex that an entire blog post could be written just about how we added reactions to those use cases. Overall, in order to be successful, this collaboration model required the use of two of LinkedIn’s core cultural values: Members First and Act Like an Owner.
Members First
When two teams collaborate, each team brings a unique piece of the puzzle to the project. In the case of reactions, our team understood the reactions feature, whereas the partner team understood how “likes” impacted their existing member experience. In order to negotiate what changes were required for reactions, we had to anchor the conversation in the members’ experience.
The more the engineers involved understood the product, the easier it was to work with them to maintain a consistent member experience across LinkedIn. For example, if we display counts on a post that sum up all the reactions, then we need to display those counts consistently in all contexts. Otherwise, it confuses our members to see a “like” count on notifications, a reactions count on the feed, and other count inconsistencies. Once we explained the reactions member experience, partners were better equipped to evaluate how their changes would affect it, and together we could execute on a shared goal of establishing a consistent reactions experience for our members.
Act Like an Owner
A core principle for implementing reactions was “one team.” Although there was no single team formed to implement every aspect of the reactions project, each team took complete ownership over the contributions they made.
Not only did each team own certain changes that needed to be made for reactions, but they also owned verifying the changes and debugging any issues resulting from them. This sense of ownership was critical for collaborating across teams because we could act as one team even though we were in different locations or from different reporting lines. We could count on every engineer to step in and provide their specialized perspective as we continued to meet new challenges over the course of the project.
Building out reactions was not the work of one team under a single organization within the company, but rather the work of dozens of teams. Although the project required varying levels of commitment from each team, it would not have been launch-ready without every team’s contribution. If a single team had opted out of its commitment, a member experience would have been left behind.
I’d like to acknowledge the people whose work made this feature possible. Thank you to the entire development team:
Ricardo Rivera, Cissy Chen, Lee Mallabone, Woye Lin, Zoe Ingram, Shipra Jain, Adela Gao, Madhura Deo, Tao Ning, Wei Cao, Aaron Chen, Kevin Morgan, Peter Shatara, Sagar Raut, Tiffany Huang, Rashi Agarwal, Chintzia Torrente, Kurt Mcculloch, Nate Whitson, Karolina Sadocha, Adam Miller, Daniel Hagman, Bryan Levay, Amy Huang, Zhiyuan Zou, Navneet Saini Singh, Hassan Khan, Xin Hu, Jack Li, Shun Yao, Angela Shao, Youngchae Kim, Brett Konold, Tim Chao, Yeonhoo Park, Tony Lai, Ning Luo, Poojan Shah, Yuan Sun, James Hung, Boyi Chen, Claudia Hinkle, Sean Johnson, Chris Ng, Justin Peterman, Ellis Weng, Kevin Di, Harleen Serai, Brent Dimapilis, Ryan Downing, Siddhesh Gandhi, Mark Dietz, Shuya Wang, Chris Winzenburg, Kailun Shi, Saurabh Agarwal, Sudheendra Chari, Nikhil Chandna, Gaurav Maheshwari, Niranjan Sharma, Sandesh Karkera, Hari Prasanna, Maria Iu, Kevin Arcara, Joe Farquharson, My Trinh, Heidi Wang, Pete Davies, Rebecca Chu, George Penston, Jessica Kahn, and Prachi Gupta.