Data Management

Upscaling LinkedIn's Profile Datastore While Reducing Costs

Co-Authors: Estella Pham and Guanlin Lu

At peak, LinkedIn serves over 4.8 million member profiles per second. The number of requests to our storage infrastructure doubles every year. In the past, we addressed latency, throughput, and cost issues by migrating off Oracle onto Espresso, an open-source document platform, and by adding more nodes. We are now at the point where some of the core components are straining under the increasing load, and we can no longer address scaling concerns by simply adding nodes.

Instead, we chose to introduce Couchbase as a centralized storage tier cache for read scaling. This solution achieved a cache hit rate of over 99%, reduced tail latencies by more than 60%, and trimmed the cost to serve by 10% annually. The decision also introduced a number of engineering problems, because the cache isn't backed by a primary storage infrastructure. In this blog post we'll discuss our decision to leverage Couchbase, the challenges that arose, and how we addressed each challenge in our final solution.

Background

Before we dive into the changes made to adopt Couchbase, we want to provide the readers with some important background information and context. 

The majority of profile requests are reads. Historically, the Profile backend employed a centralized cache, using memcached as a read cache between the application and the database. That solution did not work well for us: we experienced performance degradation during cache expansion and node replacement, along with cache warm-up issues, and maintaining memcached was challenging.

When we migrated from Oracle to Espresso in 2014, we no longer needed a read cache because of the built-in scalability and latency characteristics of Espresso. We succeeded in scaling the Espresso Profile datastore to support a service call count of over 1.4 million QPS at peak by expanding the cluster size of the backend application along with Espresso. Eventually, that approach hit the upper limit of the shared Espresso components, and raising that limit would have required a substantial engineering investment.

Member profiles are stored in Espresso in Avro binary format. Espresso supports document schemas, and a schema can evolve over time; the schema version number is incremented with every schema change.
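
To make the schema-evolution mechanics concrete, here is a minimal, hypothetical example (the field names are illustrative, not the actual Profile schema): a newer schema version adds an optional field with a default value, which is what allows records written under the older version to still be read.

  import org.apache.avro.Schema;

  public class SchemaEvolutionExample {
    // Hypothetical "version N" of a profile-like document schema.
    private static final String V_N = "{"
        + "\"type\":\"record\",\"name\":\"Profile\",\"fields\":["
        + "{\"name\":\"firstName\",\"type\":\"string\"},"
        + "{\"name\":\"lastName\",\"type\":\"string\"}]}";

    // Hypothetical "version N+1": adds an optional field with a default,
    // so records written with version N can still be resolved against it.
    private static final String V_N_PLUS_1 = "{"
        + "\"type\":\"record\",\"name\":\"Profile\",\"fields\":["
        + "{\"name\":\"firstName\",\"type\":\"string\"},"
        + "{\"name\":\"lastName\",\"type\":\"string\"},"
        + "{\"name\":\"headline\",\"type\":[\"null\",\"string\"],\"default\":null}]}";

    public static void main(String[] args) {
      Schema olderVersion = new Schema.Parser().parse(V_N);
      Schema newerVersion = new Schema.Parser().parse(V_N_PLUS_1);
      System.out.println("older version has " + olderVersion.getFields().size() + " fields");
      System.out.println("newer version has " + newerVersion.getFields().size() + " fields");
    }
  }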

When a LinkedIn member updates their profile, the request is sent to the Profile backend application, which serializes the change by converting the document into Avro binary format before persisting it in Espresso. When someone views a LinkedIn profile, the Profile frontend application sends a read request to the Profile backend application, which fetches the profile from Espresso and deserializes the response, converting the Avro binary content back into a document to return to the frontend application.

A profile view or update travels through multiple applications as summarized in Figure 1.  Espresso consists of routers and storage nodes. An Espresso router serves as a proxy for profile requests, deciding which database partition holds a member’s profile using the unique identifier of the record, and directing the read/write request to the right Espresso storage node. Every Espresso router has an off-heap cache (OHC).


Figure 1.  Data flow for a profile view or update 

A datum reader is used to deserialize Avro binary content. It requires both the writer's schema (the one representing the profile document when it was persisted to Espresso) and the reader's schema (the one currently used by the application). Both schemas are required because they can differ due to schema evolution. Converting a profile from an earlier schema version to the latest version is referred to as schema upconversion. Prior to the Couchbase adoption, schema upconversion took place in the Espresso storage nodes.
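
The sketch below shows the general shape of that resolution step using the standard Avro datum reader; it is illustrative only and not LinkedIn's internal reader.

  import java.io.IOException;
  import org.apache.avro.Schema;
  import org.apache.avro.generic.GenericDatumReader;
  import org.apache.avro.generic.GenericRecord;
  import org.apache.avro.io.BinaryDecoder;
  import org.apache.avro.io.DecoderFactory;

  public final class ProfileDeserializer {
    /**
     * Deserializes Avro binary content, resolving the writer's schema
     * (the version the record was persisted with) against the reader's
     * schema (the version the application currently uses). Avro fills in
     * defaults for fields added after the record was written.
     */
    public static GenericRecord deserialize(byte[] avroBytes,
                                            Schema writerSchema,
                                            Schema readerSchema) throws IOException {
      GenericDatumReader<GenericRecord> datumReader =
          new GenericDatumReader<>(writerSchema, readerSchema);
      BinaryDecoder decoder = DecoderFactory.get().binaryDecoder(avroBytes, null);
      return datumReader.read(null, decoder);
    }
  }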

Every Espresso router has an OHC that is configured with the Window TinyLFU cache eviction policy to retain frequently accessed records. OHC has proven to be highly efficient as a hot-key cache. For the LinkedIn member profile use case, however, OHC has a low cache hit rate for two reasons. First, OHC is a local cache on a router and only sees the read requests sent to that router. Second, profile requests employ projection (i.e., applications request only the fields of interest rather than the entire document), which further reduces the cache hit rate.
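
To picture the projection effect, assume (as a simplification, not a description of OHC internals) that a response cache must distinguish entries by the set of requested fields; the key space then fragments quickly.

  import java.util.Set;
  import java.util.TreeSet;

  public final class ProjectedCacheKey {
    /**
     * Illustrative only: a response cache that must distinguish projections
     * effectively keys on (recordId, sorted field set). Two requests for the
     * same member with different field lists produce different keys, so they
     * cannot share a cache entry.
     */
    public static String keyFor(long memberId, Set<String> projectedFields) {
      return memberId + "|" + new TreeSet<>(projectedFields);
    }

    public static void main(String[] args) {
      System.out.println(keyFor(42L, Set.of("firstName", "headline")));
      System.out.println(keyFor(42L, Set.of("firstName", "positions")));
      // Different keys for the same member => lower hit rate than caching
      // the full document under a single key per member.
    }
  }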

At LinkedIn, we have used Couchbase as a distributed key-value cache for various applications. It was chosen for the enhancements it confers over memcached, including data persistence across server restarts, replication such that one node can fail out of a cluster with all documents still available, and dynamic scalability, in that nodes can be added or removed with no downtime.

Profile upscale solution

Because profile requests are overwhelmingly reads (>99% reads and <1% writes), caching was one option we explored when we considered how to expand the Profile Espresso datastore. Note that the Profile Espresso cluster had grown to the point where adding more hardware to expand its capacity, as we had done before, was no longer an option without a major reengineering effort. Because Couchbase is widely used at LinkedIn, it was natural for us to consider it for this use case. Furthermore, with Couchbase as an upscaling solution for Espresso, the cache can reside in the storage layer, affording application owners the benefit of not having to deal with the caching internals.

However, for any cache to be used for upscaling, it must operate completely independently from the source of truth (SOT) and must not be allowed to fall back to the SOT on failures. We addressed these requirements with three design principles.

Cache design principles

The three design principles for a cache used for upscaling are:

  1. Guaranteed resilience against any Couchbase failures

  2. All-time cached data availability

  3. A strictly defined Service Level Objective (SLO) on data divergence between the SOT and the cache.

Resilience against any Couchbase failures 

Resilience against any Couchbase failure is critical, since the cache is not allowed to fall back to the SOT. We implemented the following guardrails to protect important components of the integrated system and prevent a cache failure from creating a cascade of failures.

  1. Espresso router: We maintain a Couchbase health monitor in every router to track the health of all Couchbase buckets the router has access to. The health monitor evaluates a bucket's health based on the request exception rate against a predetermined threshold. If a bucket is unhealthy, the router stops dispatching new requests to that bucket, preventing requests from accumulating in the router's heap, which would lead to client timeouts and eventually take the router down. (A simplified sketch of such a monitor follows this list.)

  2. Couchbase leader replica failure: We chose to have three replicas for Profile data (one leader replica and two follower replicas) and implemented a leader-then-follower API. Every request is fulfilled by the leader replica. If the leader replica fails, the router fetches the data from a follower replica.

  3. Retrying eligible failed Couchbase requests: Although a Couchbase failure can have many possible causes, a Couchbase exception falls into one of three categories: an Espresso router issue, a networking issue, or a Couchbase server-side issue. For the first two categories, we retry the failed Couchbase request on a different router, on the assumption that the issue may be localized and transient and that a retry may succeed.
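
As referenced in the first guardrail above, here is a minimal sketch of a per-bucket health monitor; the class name, windowing, and threshold handling are simplified assumptions rather than the production implementation.

  import java.util.concurrent.atomic.LongAdder;

  /**
   * Hypothetical sketch of a per-bucket health monitor: it tracks the
   * request exception rate over a window and marks the bucket unhealthy
   * when the rate crosses a threshold, so the router stops dispatching
   * new requests to it instead of letting them pile up on its heap.
   */
  public final class BucketHealthMonitor {
    private final double maxExceptionRate;     // e.g. 0.05 = 5% (assumed value)
    private final LongAdder requests = new LongAdder();
    private final LongAdder exceptions = new LongAdder();

    public BucketHealthMonitor(double maxExceptionRate) {
      this.maxExceptionRate = maxExceptionRate;
    }

    public void recordSuccess()   { requests.increment(); }
    public void recordException() { requests.increment(); exceptions.increment(); }

    /** Returns false when the bucket should stop receiving new requests. */
    public boolean isHealthy() {
      long total = requests.sum();
      if (total == 0) {
        return true;                           // no signal yet; assume healthy
      }
      return (double) exceptions.sum() / total <= maxExceptionRate;
    }

    /** Call at the start of each evaluation window. */
    public void resetWindow() {
      requests.reset();
      exceptions.reset();
    }
  }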

All-time cached data availability

To make cached data available everywhere, all the time, including during a datacenter traffic failover, we elected to cache the entire Profile dataset in every data center. This is a viable option because profile payloads are small: the 95th percentile is less than 24 KB.

It would be trivial to make data available in every data center with an infinite Time-To-Live (TTL) on every cache record. That, however, would lead to permanent data divergence, caused for example by missed deletes (i.e., a deletion occurred in the Espresso database but got lost before reaching Couchbase). Although missed deletions are rare, they can still happen when, for example, a Kafka infrastructure outage causes certain events to fall off the retention window before they can be consumed by the cache updater. As a result, we chose to set a finite TTL on cached data to ensure that any expired records are purged from Couchbase.

To prevent the Couchbase cache from becoming cold or diverging from the SOT, we periodically bootstrap Couchbase. The bootstrapping period is within the cached data TTL to guarantee that no record that exists in Espresso expires before the next bootstrap.
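
As a rough sketch of how a finite TTL can be attached to each cached record using the Couchbase Java SDK (the bucket name, connection details, and TTL value below are placeholders, not our production configuration):

  import com.couchbase.client.java.Cluster;
  import com.couchbase.client.java.Collection;
  import com.couchbase.client.java.json.JsonObject;
  import com.couchbase.client.java.kv.UpsertOptions;
  import java.time.Duration;

  public final class TtlUpsertExample {
    public static void main(String[] args) {
      // Placeholder connection details.
      Cluster cluster = Cluster.connect("couchbase://127.0.0.1", "user", "password");
      Collection profiles = cluster.bucket("profile-cache").defaultCollection();

      JsonObject cachedRecord = JsonObject.create()
          .put("scn", 123456789L)              // logical timestamp used for ordering
          .put("payload", "avro-binary-as-base64");

      // Finite TTL: the record expires unless the periodic bootstrap
      // (which runs within the TTL window) refreshes it first.
      profiles.upsert("member:42", cachedRecord,
          UpsertOptions.upsertOptions().expiry(Duration.ofDays(14)));

      cluster.disconnect();
    }
  }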

Data divergence prevention

Race conditions among the components that have write access to the Couchbase cache (an Espresso router, the cache bootstrapper, and the cache updater) can lead to data divergence. To prevent this, we order the cache updates for the same key so that a stale record is never inserted into Couchbase. To order the updates, we compare the System Change Number (SCN) values associated with cache records. Conceptually, an SCN is a logical timestamp: for each database row committed, the Espresso storage engine produces an SCN value and persists it in the binlog. The order of SCN values reflects the commit order within a data center, cluster, database, and database partition.

To stay consistent with the SOT Espresso database, our system coordinates update operations to Couchbase via SCN comparisons, and all updates to Couchbase follow Last-Writer-Wins (LWW) reconciliation: given the same key, the record with the largest SCN always replaces the existing one in Couchbase. To remember deletes, the cache updater upserts tombstone records into Couchbase rather than issuing hard deletions. A tombstone record differs from a regular cache record in that it contains only an SCN as its data payload; it is, however, subject to the same purge policy as a regular record.

Our system also compares the SCN in the storage node's response with that of the cache record, and updates Couchbase with the storage node's response if its SCN value is the same or greater. Accepting an equal SCN is intended to handle a cache miss due to an expired TTL, or a read request for the key that carries a higher document schema version.
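
A condensed sketch of this reconciliation rule (the class and field names are illustrative):

  /** Illustrative cache record: a tombstone carries only an SCN, no payload. */
  record CachedValue(long scn, byte[] payload) {
    boolean isTombstone() { return payload == null; }
  }

  final class LwwReconciler {
    /**
     * Last-Writer-Wins by SCN: an incoming record from the updater or
     * bootstrapper replaces the existing one only if its SCN is strictly
     * greater. A storage-node response written back by the router is
     * accepted when its SCN is equal or greater, which covers re-filling
     * an entry after TTL expiry or a higher-schema-version read.
     */
    static boolean shouldReplace(CachedValue existing, CachedValue incoming,
                                 boolean incomingIsStorageNodeResponse) {
      if (existing == null) {
        return true;
      }
      return incomingIsStorageNodeResponse
          ? incoming.scn() >= existing.scn()
          : incoming.scn() > existing.scn();
    }
  }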

To handle concurrent modifications that may occur while routers and/or cache updaters attempt to update a cache entry, we use Couchbase Compare-And-Swap (CAS) to detect concurrent updates and retry the update if necessary. The CAS value represents the current state of a record stored in Couchbase: each time a record is modified, its CAS value is updated.
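
The sketch below shows the general shape of a CAS-guarded, SCN-ordered write against the Couchbase Java SDK; it is a simplified example rather than our production code, and it assumes the SCN is stored as a field of the cached document.

  import com.couchbase.client.core.error.CasMismatchException;
  import com.couchbase.client.core.error.DocumentExistsException;
  import com.couchbase.client.core.error.DocumentNotFoundException;
  import com.couchbase.client.java.Collection;
  import com.couchbase.client.java.json.JsonObject;
  import com.couchbase.client.java.kv.GetResult;
  import com.couchbase.client.java.kv.ReplaceOptions;

  public final class CasGuardedWriter {
    /**
     * Writes a candidate record only if it wins the SCN comparison against
     * whatever is currently cached. The CAS value guarantees that if another
     * writer modified the entry between our read and our replace, the replace
     * fails and we retry with a fresh read.
     */
    public static void writeIfNewer(Collection cache, String key, JsonObject candidate) {
      for (int attempt = 0; attempt < 3; attempt++) {           // bounded retries
        try {
          GetResult current = cache.get(key);
          JsonObject existing = current.contentAsObject();
          if (existing.getLong("scn") >= candidate.getLong("scn")) {
            return;                                             // existing record wins (LWW)
          }
          cache.replace(key, candidate,
              ReplaceOptions.replaceOptions().cas(current.cas()));
          return;
        } catch (DocumentNotFoundException e) {
          try {
            cache.insert(key, candidate);                       // nothing cached yet
            return;
          } catch (DocumentExistsException raced) {
            // another writer inserted first; loop and re-compare
          }
        } catch (CasMismatchException raced) {
          // concurrent modification detected; loop and re-compare
        }
      }
    }
  }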

Hybrid cache strategy

Architecturally, the Espresso-integrated Couchbase cache consists of the Espresso router, the cache updater, the cache bootstrapper, and the Couchbase cluster (see Figure 2). We moved from the existing cache strategy, which employed only OHC in the routers, to a hybrid cache strategy that uses OHC for hot keys and Couchbase for all reads that cannot be satisfied by OHC.

The Espresso router has the sole responsibility of determining whether a read request can be served by the cache tier or a storage node, and by which cache tier, OHC or Couchbase. The cache contract between an application and Espresso is set with an HTTP header called the staleness bound header. Applications can enable or disable caching, and set their cache staleness tolerance per request, using this header. Based on the staleness bound value, Espresso determines where the request can be served.

One major advantage of the Espresso-integrated cache is that Espresso abstracts away all of the caching internals, freeing application owners to focus on just the business logic.


Figure 2.  Espresso-integrated Couchbase cache. Application requests are in blue, responses are in green. A cache record is a full HTTP response that holds an Espresso response containing a profile document in Avro binary format.

Read path

When the Profile backend application sends a read request to an Espresso router, the router evaluates whether it is a cacheable request based on the table schema configuration and the value set in the staleness bound request header. The router then determines whether the key is in its OHC. The request is sent to Couchbase when the key does not exist in the local OHC, or when the key exists in the OHC but its content is too stale. The Couchbase lookup can produce either a cache hit or a cache miss. A cache miss requires the request to be served by a storage node; in that case, the router sends the document back to the Profile backend application while asynchronously upserting it into the hybrid cache tier.
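
Putting the read path together, a much-simplified decision flow looks roughly like the following; the helper methods are placeholders for logic that lives inside the router.

  import java.util.Optional;

  /** Illustrative router-side read flow; all helper methods are placeholders. */
  abstract class ReadPathSketch<V> {

    V read(String key, long stalenessBoundMillis) {
      // 1. Hot-key cache local to this router.
      Optional<V> fromOhc = readFromOhc(key, stalenessBoundMillis);
      if (fromOhc.isPresent()) {
        return fromOhc.get();
      }
      // 2. Centralized Couchbase cache shared by all routers.
      Optional<V> fromCouchbase = readFromCouchbase(key);
      if (fromCouchbase.isPresent()) {
        return fromCouchbase.get();
      }
      // 3. Cache miss: serve from the Espresso storage node, return the
      //    response immediately, and upsert it into the cache tier asynchronously.
      V fromStorage = readFromStorageNode(key);
      upsertIntoCacheTierAsync(key, fromStorage);
      return fromStorage;
    }

    abstract Optional<V> readFromOhc(String key, long stalenessBoundMillis);
    abstract Optional<V> readFromCouchbase(String key);
    abstract V readFromStorageNode(String key);
    abstract void upsertIntoCacheTierAsync(String key, V value);
  }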

Write path

Cached profiles in Couchbase must be kept in sync with the data in Espresso. We implemented a cache updater and a cache bootstrapper as Samza jobs. They consume Espresso change events from a Brooklin change capture stream and a Brooklin bootstrap stream, respectively, and upsert them into the Couchbase cache (see Figure 2). The Brooklin change capture stream is populated with database rows that have been committed to the SOT, whereas the Brooklin bootstrap stream is populated from a periodically generated database snapshot.
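
A skeletal cache updater using Samza's low-level task API might look like the following; the envelope payload layout and the Couchbase writer helper are assumptions for illustration, and the Brooklin event schema is not shown.

  import org.apache.samza.system.IncomingMessageEnvelope;
  import org.apache.samza.task.MessageCollector;
  import org.apache.samza.task.StreamTask;
  import org.apache.samza.task.TaskCoordinator;

  /**
   * Sketch of a cache updater: consumes Espresso change events (from a
   * Brooklin change capture stream) and upserts them into Couchbase.
   * The bootstrapper has the same shape but reads the bootstrap stream.
   */
  public class CacheUpdaterTask implements StreamTask {

    private final CouchbaseCacheWriter cacheWriter = new CouchbaseCacheWriter(); // assumed helper

    @Override
    public void process(IncomingMessageEnvelope envelope,
                        MessageCollector collector,
                        TaskCoordinator coordinator) {
      // Assumed envelope layout: key = member id, message = change event
      // carrying the SCN and the row image (or a delete marker).
      String memberId = (String) envelope.getKey();
      ChangeEvent event = (ChangeEvent) envelope.getMessage();

      if (event.isDelete()) {
        cacheWriter.upsertTombstone(memberId, event.getScn());   // remember the delete
      } else {
        cacheWriter.upsertIfNewer(memberId, event.getScn(), event.getPayload());
      }
    }

    // Placeholder types standing in for internal classes.
    interface ChangeEvent {
      boolean isDelete();
      long getScn();
      byte[] getPayload();
    }

    static class CouchbaseCacheWriter {
      void upsertTombstone(String key, long scn) { /* SCN-only record, same TTL policy */ }
      void upsertIfNewer(String key, long scn, byte[] payload) { /* LWW by SCN */ }
    }
  }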

Since the Brooklin streams are nearline, the database changes synchronized from the Espresso datastore to Couchbase follow the eventual consistency principle. The Espresso-integrated Couchbase cache needs to cope with race conditions raised by multiple writers to the same Couchbase cache. To prevent data divergence, we introduced a logical timestamp to order the writes (see "Data divergence prevention" for more details).

Full document

While it is optimal for applications to apply projection to reduce response payloads for performance reasons, transitioning to the Espresso-integrated hybrid cache required us to evaluate projection and where it should be applied. The number of projections used by applications is large and varied. If we were to cache projected responses, the cache would not be very efficient because the number of possible projection permutations is immense. Moreover, since the majority of profiles are small, we decided to cache the full profile dataset. Projection continues to be supported, but the responsibility for applying projection to full profiles has moved from the Espresso storage nodes to the Profile backend application.

Profile backend changes

With Espresso now returning a full document to the Profile backend as it was written (i.e., based on the profile schema version used in the last update), the responsibility for performing schema upconversion lies with the Profile backend.

Schema upconversion

Prior to Couchbase, when the Profile backend sent Espresso a read request, it set the accept-type HTTP request header to the latest registered Avro schema version, for example version 55. The data returned by Espresso conformed to schema version 55, and the datum reader used schema version 55 when it deserialized the Avro binary.

With Couchbase, and with the profile schema having evolved to, for example, version 70, the Profile backend sets the accept-type HTTP request header to version 0 when it sends Espresso a read request, telling Espresso to return the data as it was written. The Profile backend now performs the schema upconversion, transforming a record written with schema version 55 into version 70.


Figure 3.  Deserialization flow after Couchbase adoption in the Profile backend

Projection

Because profile documents are returned as full binaries from the Espresso routers, the Profile backend applies projection to each document, as dictated by the query's projection parameter, before deserializing it and returning it to the client.
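
Conceptually, the projection step now performed in the Profile backend is a field filter over the full document. The toy version below operates on a generic map and ignores nested paths; it is not the actual Pegasus projection machinery.

  import java.util.LinkedHashMap;
  import java.util.Map;
  import java.util.Set;

  public final class ProjectionSketch {
    /**
     * Keeps only the requested top-level fields of a full document.
     * Real projections also support nested paths; this toy version
     * only illustrates the idea.
     */
    public static Map<String, Object> project(Map<String, Object> fullDocument,
                                              Set<String> requestedFields) {
      Map<String, Object> projected = new LinkedHashMap<>();
      for (String field : requestedFields) {
        if (fullDocument.containsKey(field)) {
          projected.put(field, fullDocument.get(field));
        }
      }
      return projected;
    }

    public static void main(String[] args) {
      Map<String, Object> profile = Map.of(
          "firstName", "Ada", "lastName", "Lovelace", "headline", "Engineer");
      System.out.println(project(profile, Set.of("firstName", "headline")));
    }
  }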

Registered schemas

Prior to Couchbase, the Profile backend cached all Avro schemas statically during startup. The schema cache was not expected to change during the runtime of the application: if a new profile schema version 70 became available, the Profile backend would not have version 70 until its next restart. This approach worked because Espresso performed the schema upconversion to the latest version configured in the Profile backend.

When the schema upconversion moved to the Profile backend, the application had to be able to fetch the latest registered schema from the registry whenever it needs a specific version for deserialization. This requirement comes from the deployment pipeline, which first deploys a new software version to a small number of instances, the canaries, where it must pass predefined tests before it is allowed to roll out to the full cluster. For example, during a typical deployment, when a Profile backend canary updates a profile using the latest schema version 70, any other Profile backend instance that needs that schema version, but is still configured with version 69 as its latest, must be able to fetch schema version 70 from the schema registry.
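
A simplified sketch of this runtime fallback is shown below; the schema registry client interface is hypothetical, standing in for the LinkedIn-internal one.

  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.ConcurrentMap;
  import org.apache.avro.Schema;

  public final class ProfileSchemaCache {
    /** Hypothetical stand-in for the internal schema registry client. */
    public interface SchemaRegistryClient {
      String fetchSchemaJson(String schemaName, int version);
    }

    private final ConcurrentMap<Integer, Schema> schemasByVersion = new ConcurrentHashMap<>();
    private final SchemaRegistryClient registry;

    public ProfileSchemaCache(SchemaRegistryClient registry) {
      this.registry = registry;
    }

    /**
     * Returns the schema for the requested version, fetching it from the
     * registry on first use. This is what lets a non-canary instance
     * deserialize records written by a canary running a newer schema
     * version, without waiting for a restart.
     */
    public Schema getOrFetch(int version) {
      return schemasByVersion.computeIfAbsent(version,
          v -> new Schema.Parser().parse(registry.fetchSchemaJson("Profile", v)));
    }
  }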


Figure 4.  A Profile backend instance in the deployment queue  

Pegasus datum reader

One optimization we made involves a new datum reader. The previous datum reader converts an Avro binary object to a GenericRecord, which then gets translated to a DataMap used to create a Profile document. The new datum reader bypasses the intermediate GenericRecord step and converts the Avro binary directly to a DataMap. This change gave us a performance boost (see Table 2) and compensated for the performance hit incurred by relocating the upconversion and projection to the Profile backend.

Performance analysis

Tail latency reduction

The end-to-end latencies, which record the round-trip time the Profile backend spends fetching profiles from Espresso, dropped significantly. We categorize the read requests received by Espresso as single get when a request contains a single key, and multi get when a request contains multiple keys. With the majority of read requests being multi get requests, the 99th percentile latency dropped by 60.73% and the 99.9th percentile latency by 63.66% (see Table 1).

Profile Latency     Without Couchbase (ms)    With Couchbase (ms)    Reduction (ms)    Reduction (%)
Single Get P99      4.184                     3.984                  -0.2              -4.78
Single Get P99.9    25.64                     15.15                  -10.49            -40.91
Multi Get P99       31.6                      12.41                  -19.19            -60.73
Multi Get P99.9     66.87                     24.3                   -42.57            -63.66

Table 1.  The round trip latencies between the Profile backend and Espresso

Cache effectiveness

Our Couchbase cache achieves a cache hit rate of over 99% (see Figure 5).


Figure 5.  Couchbase cache hit rate

Pegasus datum reader

Using the Pegasus datum reader, performance gains are seen across all percentiles (measured in microseconds) when deserializing individual profile records. Comparing the control datum reader (Espresso) with the treatment datum reader (Pegasus), the Pegasus datum reader performs much better across the board: the 95th percentile latency dropped by 34.1%, the 99th percentile by 37.4%, and the 99.9th percentile by 28.0%.

Percentile        P50       P90       P95       P99       P99.9
Espresso (µs)     234.35    406.55    477.05    729.25    1080
Pegasus (µs)      138.3     261.95    314.4     456.25    777.15
Reduction (µs)    -96.05    -144.60   -162.65   -273.00   -302.85
Reduction (%)     -41.0     -35.6     -34.1     -37.4     -28.0

Table 2.  Deserialization of individual profile latencies measured in microseconds

Cost savings

With the Espresso hybrid cache tier, we were able to reduce the number of Espresso storage nodes by 90%. Since we also set up new infrastructure (e.g., the new Couchbase cluster and the Samza bootstrapping and updating jobs) and incur additional costs for the increase in Profile backend computing resources needed to handle the upconversion and projection (for which we budgeted a maximum 30% buffer), we conservatively estimate that we save LinkedIn about 10% annually on the cost of serving member profile requests.

Conclusion

The adoption of the Espresso-integrated Couchbase cache has enabled us to achieve our goals: supporting a growing LinkedIn member base, upscaling the Profile datastore, and lowering the cost to serve, all without impacting performance or the member experience.

Acknowledgments

This project could not have been done without significant contributions, over more than 1.5 years, from groups of people across LinkedIn: the Espresso development team, the Espresso SRE team, the Couchbase team, the Identity Platform team, the Samza team, and the Identity SRE team.

Many thanks to Karthik Naidu DJ, Ben Weir, Michael Chletsos, Rick Ramirez, Bef Ayenew, Alok Dariwal, Madhur Badal, Gayatri Penumetsa for your leadership and support for this project.  

Many thanks to Jason Less for implementing changes on the Profile backend and conducting performance analyses and validation; Jean-Francois Dejeans-Gauthier for valuable RB and RFC review feedback; Gaurav Mishra for designing and implementing the Couchbase health monitor and for root-causing and fixing issues throughout the Espresso Couchbase cache development; Antony Curtis for implementing concurrent update and SCN comparison in the router and sharing idiomatic patterns with CompletableFuture; Keshav Bachu for implementing and improving the cache updater and bootstrapper; Zhantong Shang for designing and implementing the periodic cache bootstrapping scheduler; Laxman Prabhu and Yun Sun for valuable feedback on the Espresso caching design; Ning Xu for the early exploration work; Kamlakar Singh for providing valuable feedback on observability and resiliency; Hongyi Ma and Himanshu Gupta for general support near the project completion; Olu Owolabi for setting up the dark cluster and configuration; William Nguyen for the Profile backend performance assessment; Robert Engel for site operations feedback; Ben Weir for sizing, setting up, and configuring Couchbase; and Zhengyu Cai for cost-saving analyses.

Thanks to Shun Xuan Wang, Mahesh Vishwanath, Katherine Vaiente, Madhur Badal, Karthik Naidu DJ, and Rick Ramirez for editorial reviews.