Search

Building a more intuitive and streamlined search experience

Co-authors: Rashmi Jain and Sonali Bhadra

Search on LinkedIn is an essential part of our members’ experience, with more than 44% of members who visit the site each week performing at least one search. We’re continually working to improve that experience and recently introduced a new, streamlined, and intuitive search experience that makes people, jobs, groups, companies, courses, and more easier to discover. Since ramping, we’ve seen strong, tangible signs that the overhaul is helping members connect to more communities, opportunities, and resources. Meanwhile, on the backend, we’ve removed major roadblocks for our engineers, reducing the time it takes to build and ramp a new search entity type from 21 weeks to one week.

In this blog post, we’ll detail our journey of rebuilding search, explain how we prioritized both the member and engineer experience, and share what we learned along the way. Our challenge as an engineering team was to rethink our search infrastructure in a way that would enable our members to better connect with the opportunities, resources, and communities on LinkedIn, while also empowering our team internally by removing the roadblocks that sprang up as we grew to nearly 740 million members.

Setting the scene

During our last search rewrite in 2015, we prioritized simplifying our tech stack and speeding up development through continuous integration and deployment. Since then, we’ve seen strong member adoption of search on a number of fronts. However, we started to run into challenges, such as inconsistent, non-intuitive experiences and a long turnaround time for adding new searchable entities. Onboarding a new use case to search was a cumbersome process across the stack: entity owners had to understand search’s domain-specific architectures and manually integrate their data into multiple layers of the search ecosystem, resulting in huge bottlenecks and repeated iterations.

In addition, our frontend architecture for search had several limitations when it came to scaling new features and experimentation velocity. This was due to a lack of standardization in the code architecture and duplicated effort across clients (mobile, desktop), which caused a significant increase in app size. These architectural problems varied in nature and intensity across platforms.

As we looked ahead, we needed next-generation frontend technology that could scale to serve search’s growing product needs, built on the following guiding principles: architectural consistency; separation of concerns; efficiency; and developer productivity.

We also knew we needed to become more nimble, especially with regard to developing new search entities and experiences. It previously took around 21 weeks to onboard a new entity for search. That was unacceptable, so we set a goal of onboarding new search entities in less than one week.

Our approach 

We set clearly defined design principles focused on the member experience, along with product guidelines to keep the team focused. For the purposes of this blog, however, we’ll focus on the development principles we kept in mind as we built the new search. These helped us ensure high craftsmanship, a scalable engineering design, debuggable code, and well-understood metrics.

  • Commitment to craftsmanship, including a consistent architecture across platforms 

  • Testable and debuggable platform, including a unit-to-scenario test coverage ratio of 70%

  • An accessible and localized experience

  • Secured and trusted, with built-in security for each layer

  • A fast experience, with speed never sacrificed for new features

The LinkedIn search ecosystem

It’s also important to grasp the complexity of the LinkedIn search ecosystem, which consists of multiple layers, each with a specific job, as described below. To enable faster onboarding of new entities to the search ecosystem, we planned a platformization effort for each layer. In this blog post, we’ll focus on the top layer, the frontend clients and the frontend API, along with our improvements in tracking coverage, which give the machine learning pipelines robust data for experimentation.

  • Flagship Search frontend clients and API: the web, iOS, and Android clients that interact with members and display search results, plus the API responsible for routing requests to the correct backends and stitching the responses together for decoration and presentation of search results and suggestions.

  • Search midtier: Includes a federator responsible for scatter-gather of different result types and a machine learning pipeline responsible for ranking and blending the result types.

  • Search backend: Includes offline and online indexes that power the search engine and machine learning pipelines responsible for maximizing successful searches based on intent.

LinkedIn Search ecosystem architecture 
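
To make the midtier’s role concrete, here is a minimal sketch of the scatter-gather pattern a federator like this typically implements; the backend shapes and names below are illustrative assumptions, not LinkedIn’s actual services.

```typescript
// Minimal scatter-gather sketch; result shapes are illustrative assumptions.
interface ResultCluster {
  entityType: string; // e.g., "PEOPLE", "JOBS", "GROUPS"
  results: unknown[];
}

type Backend = (query: string) => Promise<ResultCluster>;

// Scatter the query to every registered backend, then gather whatever
// succeeds; a ranking/blending ML pipeline would order the surviving
// clusters afterward.
async function federate(
  query: string,
  backends: Backend[]
): Promise<ResultCluster[]> {
  const settled = await Promise.allSettled(backends.map((b) => b(query)));
  return settled
    .filter(
      (s): s is PromiseFulfilledResult<ResultCluster> =>
        s.status === "fulfilled"
    )
    .map((s) => s.value);
}
```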

Rewriting the frontend clients and API 

As we turned our attention to rewriting the frontend clients and APIs, our engineering goal was clear: 

  1. Develop a set of reusable UI building blocks and an extendable endpoint to maximize leverage, achieve a consistent search experience, and minimize onboarding time.

  2. Adopt render models over data models for faster product iteration.

  3. Build comprehensive tracking covering all UI elements shown to members and their interactions with them, providing high-quality data for training ML models and for creating new, feature-specific metrics in the data science pipeline.

For the mobile clients rewrite, we started with a code base that had a fragmented architecture, with code and patterns not widely shared across the app. This fragmentation increased review, build, and test time, bloated the app size, and caused flaky tests. To improve the situation, we standardized on a consistent app architecture and set clear patterns for code sharing, data/view layer separation, and testing, all while staying space- and time-efficient.

We also made standardizing the results card framework a priority. Many pages on LinkedIn, in both the browser and the app, need to show a list of entity results in a page view, where each entity follows a format similar to the one shown below. Prior to the rewrite, the different teams owning these pages would each implement their own custom models, endpoints, APIs, and UI. As expected, we ended up with an inconsistent member and developer experience across teams, products, and platforms, sometimes even resulting in different implementations on the same page. It’s no wonder that building and onboarding a new entity type previously took 21 weeks, given the need for custom code in the frontend API as well as in all three clients.

For the rewrite, we adopted a modeling approach known as render models, in which the API payload closely mirrors the view rendered on clients and the backend controls what data populates the views. This allows us to onboard new entity types to the search ecosystem with just API formatter changes, with the clients rendering the formatted data out of the box. As expected, this dramatically increased our iteration velocity and our ability to create new entities, while reducing the data-to-view conversion overhead on clients, shrinking the app size, and making clients more efficient. It also enables fast experimentation moving forward.
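
To illustrate the difference, here is a hedged sketch contrasting the two approaches; all field names below are hypothetical, not LinkedIn’s actual schema.

```typescript
// Hypothetical field names, for illustration only.

// Data-model approach: clients receive raw entity data, and each client
// (web, iOS, Android) duplicates the logic that turns it into a view.
interface JobPostingData {
  title: string;
  companyName: string;
  location: string;
  postedAt: number; // epoch millis; every client formats "3 days ago" itself
}

// Render-model approach: the API sends view-shaped, pre-formatted fields,
// so all three clients simply bind them to a shared results-card template.
interface EntityRenderModel {
  title: string;             // "Senior Software Engineer"
  primarySubtitle: string;   // "LinkedIn · Sunnyvale, CA"
  secondarySubtitle: string; // "Posted 3 days ago"
  imageUrl: string;          // resolved by the API, not the client
  navigationUrl: string;
}
```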

As shown in the image below, we defined extensible templates containing slots for each UI element, which act as the contract between the frontend clients and the frontend API. Used correctly, these templates support rendering the UI components out of the box, with no changes required on clients. The framework also provides hooks for customizations like custom insights and custom actions, where the onboarding use case defines the new models for the customization and clients do a small amount of work to support them.

Extensible templates containing slots for each UI element
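
A minimal sketch of what such a slot-based template contract might look like; the slot and hook names are assumptions for illustration.

```typescript
// Hypothetical slot and hook names, for illustration only.
interface CustomInsight {
  text: string;
  iconUrl?: string;
}

interface CustomAction {
  label: string;
  actionUrl: string;
}

interface EntityResultTemplate {
  // Standard slots: rendered out of the box by all three clients.
  title: string;
  primarySubtitle?: string;
  secondarySubtitle?: string;
  imageUrl?: string;
  navigationUrl: string;
  // Customization hooks: an onboarding use case defines new models here,
  // and clients add a small amount of code to render them.
  customInsight?: CustomInsight;
  customActions?: CustomAction[];
}
```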

The other primary action we took was building a reusable search framework, which pre-wires all the UX components to a common API endpoint that serves results to clients in the form of render models. It has off-the-shelf support for error handling, the no-results page, accessibility, testing, standardized tracking, monitoring, and UMP metric reporting. To build this new common endpoint, we had to rewrite three major components (sketched in code after the list):

  1. Query formulation: We defined a generic SearchQuery model, which serves current keyword searches and can easily be extended with new query parameters to support complex use cases or new backends like graph search.

  2. Query routing: Each use case is assigned a queryIntent value, which is used in query formulation, and the reusable search API uses it to route the request to the correct use case handler.

  3. Serving render models: Render models form the contract between the frontend clients and frontend API, shielding all three clients (web, iOS, and Android) from backend changes and increasing feature iteration velocity.
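
Putting the three pieces together, here is a minimal sketch of how such an endpoint might formulate, route, and serve a query; SearchQuery and queryIntent come from the description above, while the handler registry and field names are assumptions.

```typescript
// SearchQuery and queryIntent are named in the post; the rest is assumed.
interface SearchQuery {
  keywords: string;
  queryIntent: string; // e.g., "PEOPLE", "JOBS", "LEARNING"
  queryParameters?: Record<string, string[]>; // extensible for new use cases
}

interface RenderModel {
  title: string;
  navigationUrl: string;
}

type UseCaseHandler = (query: SearchQuery) => Promise<RenderModel[]>;

// Query routing: each use case registers a handler under its queryIntent.
const handlers = new Map<string, UseCaseHandler>();

async function search(query: SearchQuery): Promise<RenderModel[]> {
  const handler = handlers.get(query.queryIntent);
  if (!handler) {
    throw new Error(`No handler registered for intent ${query.queryIntent}`);
  }
  // Serving render models: the handler's output is already view-shaped, so
  // web, iOS, and Android can render it without backend-specific code.
  return handler(query);
}
```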

With reusable search, one of the greatest advantages is that partner teams requiring similar UX components can build complete search functionality and productionize it on all three clients within a week, leading to a clean, consistent, and intuitive product experience across platforms.

Laying the foundation for relevancy improvements

Before the redesign, the blended search results page consisted of one “primary” result type (often people) displayed in a list, with a limited number of “secondary” carousel clusters inserted into that list. As we onboarded more and more result types, and as the LinkedIn ecosystem matured beyond people search, this no longer worked well. The cap on secondary clusters kept some highly relevant result clusters from showing up, and the separate models for determining the “primary” list and “secondary” clusters added complexity whenever we onboarded a new result type, which had to be wired into separate models and workflows.

Instead of inserting secondary clusters into a primary list, we created a new experience that can support a large number of clusters and makes them easy to scan and consume. This redesign laid the foundation for our new unified machine learning model, which is able to serve the evolving needs of our members.
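
To make the structural change concrete, here is a hypothetical before/after of the page’s shape; the type names are illustrative, not the actual page models.

```typescript
// Illustrative type names only.

// Before: one primary list (often people) with a capped number of
// secondary carousel clusters spliced in at fixed positions.
interface OldBlendedSerp {
  primaryResults: unknown[];
  secondaryClusters: { position: number; results: unknown[] }[]; // hard cap
}

// After: a uniform, ordered list of clusters, so a single unified ranking
// model can blend any number of result types.
interface NewBlendedSerp {
  clusters: { entityType: string; results: unknown[] }[];
}
```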

Additionally, we designed and implemented a comprehensive tracking system that integrates with the new search platform and makes high-quality impression and interaction data available for every part of the search experience. This has enabled our Search AI team to leverage the rich data to improve models, and our data science team to create new metrics to measure success.
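
As a rough illustration, impression and interaction events in such a system might look like the following; every field name here is an assumption, shown only to suggest the shape of the data.

```typescript
// All field names are assumptions, not LinkedIn's actual tracking schema.
interface SearchImpressionEvent {
  searchSessionId: string;
  elementId: string;       // which UI element was shown
  position: number;        // rank of the element on the page
  visibleDurationMs: number;
}

interface SearchInteractionEvent {
  searchSessionId: string;
  elementId: string;
  actionType: "CLICK" | "SAVE" | "FOLLOW";
}

// Joining impressions with interactions yields labeled data for training
// ranking models and for metrics like search session success rate.
```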

Results

The combination of results card and reusable search frameworks helped us reduce onboarding time by 95% for new searchable entity types, going from 21 weeks to 1 week. This had a major impact on developer productivity and our ability to build a better, more consistent, and continually improving experience for members. 

Meanwhile, the improved, streamlined search experience has led to members making more connections, finding more jobs they want to apply to, and connecting with the companies and communities they’re interested in. In fact, we’ve seen a double-digit percentage increase in job applications and companies followed from search results, and significant gains in members joining groups from search.

All of these improvements show that our members are finding even more value on LinkedIn through search. We’ve also seen an increase in search session success rate, a search-quality metric defined as the percentage of search sessions with at least one satisfied click.


Product mock of the LinkedIn Search experience with advanced filters

Learnings

The incredible journey of this rewrite proved once again that change is the only constant. Every large-scale initiative hits bumps along the way during inception, planning, execution, and ramp. Being agile, having checks and balances for each phase, and over-communicating helped the team navigate the unknowns, including remote development.

On a technical front, we learned the following about designing a new scalable platform: 

  • Lock down the contract between the platform and its use cases as early as possible. This creates a clean separation between the two, so that each can iterate independently for a faster time to market without breaking other use cases. 

  • It’s essential to have monitoring, tracking, and checks built into the framework to identify if a problem is in the platform or a use case. 

  • Finalize the support and operations model between the platform and its use cases at project inception to ensure a smooth release and maintenance.

  • Keep performance and latency top of mind when making decisions at every stage of the project.

The journey was neither easy nor predictable, but our shared vision was a source of strength during difficult times. Investing early in our goals and principles helped us push through the challenging aspects of the project, including the pandemic.

Acknowledgments

Launching a new search experience across all platforms has been a cross-functional, cross-team effort. We’d like to acknowledge the contributions of the Flagship Search team (Apps, Design, Data Science, SRE, Product, AI, TPM), LinkedIn Search Infrastructure, Flagship Infra, Events, Groups, Identity, Messaging, Feed, LTS, LMS, Learning, the Product Quality team (UAPe), and the LinkedIn Performance teams, without whom this would not have been possible. We would also like to give a shout-out to Alice Xiong, Meling Wu, and Vivek Tripathi for reviewing the blog and providing valuable feedback.