Infrastructure

Solving Espresso’s scalability and performance challenges to support our member base

Espresso is the database we designed to power our member profiles, feed, recommendations, and hundreds of other LinkedIn applications that handle large amounts of data and need both high performance and reliability. As Espresso continued to expand in support of our 950M+ member base, the number of network connections it needed began to drive scalability and resiliency challenges. To address these challenges, we migrated to HTTP/2. With the initial Netty-based implementation, we observed a 45% degradation in throughput, which we needed to analyze and correct.

In this post, we will explain how we solved these challenges and improved system performance. We will also delve into the various optimization efforts we employed on Espresso's online operation section, which resulted in a 75% performance boost.

Espresso Architecture

Figure 1.  Espresso System Overview

Figure 1 is a high-level overview of the Espresso ecosystem, which includes the online operation section of Espresso (the main focus of this blog post). This section comprises two major components: the router and the storage node. The router is responsible for directing each request to the relevant storage node, while the storage layer's primary responsibility is to fetch data from the MySQL database and present the response in the desired format to the member. Espresso utilizes the open-source framework Netty for the transport layer, which has been heavily customized for Espresso's needs.

Need for new transport layer architecture

In the communication between the router and storage layer, our earlier approach used HTTP/1.1, a protocol extensively employed for interactions between web servers and clients. However, HTTP/1.1 can serve only one request at a time on a given connection, so the router needs a separate connection for every in-flight request. In the context of large clusters, this approach led to millions of concurrent connections between the routers and the storage nodes, which constrained scalability and resiliency and created numerous performance hurdles.

Scalability: Scalability is a crucial aspect of any database system, and Espresso is no exception. In our recent cluster expansion, adding 100 router nodes caused memory usage on the storage nodes to spike by around 2.5GB, attributable to the new TCP network connections. Consequently, we experienced a 15% latency increase due to the additional garbage collection. The number of connections to storage nodes posed a significant challenge to scaling up the cluster, and we needed to address it to ensure seamless scalability.

Resiliency: In the event of network flaps and switch upgrades, the process of re-establishing thousands of connections from the router often breaches the connection limit on the storage node. This, in turn, causes connection errors and leaves the router unable to communicate with the storage nodes.

Performance: When using the HTTP/1.1 architecture, routers maintain a limited pool of connections to each storage node within the cluster. In some larger clusters, the wait time to acquire a connection can be as high as 15ms at the 95th percentile due to the limited pool size. This delay can significantly affect the system's response time.

We determined that all of the above limitations could be resolved by transitioning to HTTP/2, as it supports connection multiplexing and requires a significantly lower number of connections between the router and the storage node.

HTTP/2 Implementation

We explored various technologies for the HTTP/2 implementation, but due to the strong support from the open-source community and our familiarity with the framework, we went with Netty. When using Netty out of the box, however, the HTTP/2 implementation's throughput was 45% lower than that of the original (HTTP/1.1) implementation. Because the out-of-the-box performance was so poor, we had to implement several optimizations to enhance it.

The experiment was run on a production-like test cluster, with traffic combining read and write access patterns. The results are as follows:

Protocol   QPS         Single Read Latency (P99)   Multi-Read Latency (P99)
HTTP/1.1   9K          7ms                         25ms
HTTP/2     5K (-45%)   11ms (+57%)                 42ms (+68%)

On the routing layer, further analysis using flame graphs revealed the major differences between the two protocols, summarized in the following table.

CPU overhead                                        HTTP/1.1   HTTP/2
Acquiring a connection and processing the request   20%        32% (+60%)
Encode/decode HTTP request                          18%        32% (+77%)

Improvements to Request/Response Handling

Reusing the Stream Channel Pipeline

One of the core concepts of Netty is its ChannelPipeline. As seen in Figure 2, when data is received from the socket, it is passed through the pipeline, which processes it. A ChannelPipeline contains a list of handlers, each working on a specific task.

Figure 2. Netty Pipeline
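
To make the pipeline concept concrete, the following is a minimal, illustrative initializer (not Espresso's actual pipeline) showing how handlers are chained; the handler choice and aggregator size are example values only:

import io.netty.channel.ChannelInitializer;
import io.netty.channel.socket.SocketChannel;
import io.netty.handler.codec.http.HttpObjectAggregator;
import io.netty.handler.codec.http.HttpServerCodec;

// Each inbound read flows through these handlers in order.
ChannelInitializer<SocketChannel> initializer = new ChannelInitializer<SocketChannel>() {
  @Override
  protected void initChannel(SocketChannel ch) {
    ch.pipeline()
        .addLast(new HttpServerCodec())              // bytes <-> HTTP/1.1 messages
        .addLast(new HttpObjectAggregator(1 << 20)); // aggregate chunks into full requests
    // application handlers would be added here
  }
};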

In the original HTTP/1.1 Netty pipeline, a set of 15-20 handlers was established when a connection was made, and this pipeline was reused for all subsequent requests served on the same connection. 

However, in Netty's default HTTP/2 implementation, a fresh pipeline is generated for each new stream or request. For instance, a multi-get request to a router with over 100 keys can often result in approximately 30 to 35 requests being sent to the storage nodes. Consequently, the router must initiate new pipelines for all 35 storage node requests. Creating and dismantling pipelines with a considerable number of handlers for every request turned out to be notably resource-intensive in terms of memory utilization and garbage collection.

To address this concern, we developed a forked version of Netty's Http2MultiplexHandler that maintains a queue of local stream channels. As illustrated in Figure 3, on receiving a new request, the multiplex handler no longer generates a new pipeline. Instead, it retrieves a local channel from the queue and uses it to process the request. After the request completes, the channel is returned to the queue for future use. Reusing existing channels minimizes the creation and destruction of pipelines, reducing memory strain and garbage collection.

Figure 3. Sequence diagram of stream channel reuse
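
The following is a simplified sketch of the reuse idea rather than the actual forked Http2MultiplexHandler: finished stream channels are parked in a per-event-loop queue and handed back out for the next request. Because a Netty channel is always serviced by its own event loop, a plain ArrayDeque suffices:

import java.util.ArrayDeque;
import java.util.Queue;

import io.netty.channel.Channel;

// Simplified sketch of stream channel reuse. Kept per event loop, so no
// synchronization is needed: a channel is only touched by its own I/O thread.
final class StreamChannelQueue {
  private final Queue<Channel> idle = new ArrayDeque<>();

  // On a new request: reuse a parked channel (and its pipeline) if available.
  Channel acquire() {
    return idle.poll(); // null => caller creates a fresh stream channel
  }

  // On request completion: park the channel for future requests.
  void release(Channel ch) {
    if (ch.isActive()) {
      idle.offer(ch);
    }
  }
}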

Addressing uneven work distribution among Netty I/O threads 

When a new connection is created, Netty assigns it to one of the 64 I/O threads (in Espresso, the number of I/O threads is twice the number of cores). The I/O thread associated with the connection is responsible for I/O and for handling the request/response on that connection. Netty's default implementation employs a rudimentary method for selecting one of the 64 available I/O threads for a new channel, and we observed that this approach leads to a significantly uneven distribution of workload among them.

In a standard deployment, we observed that 20% of the I/O threads were managing 50% of all connections/requests. To address this issue, we introduced a BalancedEventLoopGroup, designed to distribute connections evenly across all available worker threads. During channel registration, the BalancedEventLoopGroup iterates through the worker threads to ensure a more equitable allocation of workload.

After this change, when a channel is registered, an event loop with a connection count below the average is selected:

private EventLoop selectLoop() {
  int average = averageChannelsPerEventLoop();
  // Start with Netty's default choice.
  EventLoop loop = next();
  if (_eventLoopCount > 1 && isUnbalanced(loop, average)) {
    // The default choice is overloaded: shuffle the loops and take the first
    // one whose channel count is at or below the average.
    ArrayList<EventLoop> list = new ArrayList<>(_eventLoopCount);
    _eventLoopGroup.forEach(eventExecutor -> list.add((EventLoop) eventExecutor));
    Collections.shuffle(list, ThreadLocalRandom.current());
    Iterator<EventLoop> it = list.iterator();
    do {
      loop = it.next();
    } while (it.hasNext() && isUnbalanced(loop, average));
  }
  return loop;
}
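
The helpers referenced above are not shown in the original snippet. A plausible sketch, assuming the group tracks a per-loop channel count (the _channelCounts field and its upkeep are hypothetical):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

import io.netty.channel.EventLoop;

// Hypothetical bookkeeping: incremented on channel registration and
// decremented on channel close.
private final Map<EventLoop, AtomicInteger> _channelCounts = new ConcurrentHashMap<>();

private int averageChannelsPerEventLoop() {
  int total = _channelCounts.values().stream().mapToInt(AtomicInteger::get).sum();
  return Math.max(1, total / _eventLoopCount);
}

// A loop is "unbalanced" when it already serves more channels than average.
private boolean isUnbalanced(EventLoop loop, int average) {
  AtomicInteger count = _channelCounts.get(loop);
  return count != null && count.get() > average;
}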

Reducing context switches when acquiring a connection 

In the HTTP/2 implementation, each router maintains 10 connections to every storage node. These connections serve as communication pathways for the router I/O threads interfacing with the storage node. Previously, we utilized Netty's FixedChannelPool implementation to oversee connection pools, handling tasks like acquiring, releasing, and establishing new connections. 

However, the underlying queue within Netty's implementation is not thread-safe, so to obtain a connection from the pool, the requesting worker thread must hand off to the I/O thread overseeing the pool, a round trip that costs two context switches. To resolve this, we developed a derivative of the Netty pool implementation that employs a high-performance, thread-safe queue. The acquire task is now executed by the requesting thread itself instead of a distinct I/O thread, effectively eliminating the context switches.
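
A minimal sketch of this idea (illustrative, not the actual derivative): with a thread-safe queue such as ConcurrentLinkedQueue, any requesting thread can poll for a connection directly, without a hop to the pool's event loop:

import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

import io.netty.channel.Channel;

// Illustrative sketch: a lock-free queue lets the requesting thread acquire
// a pooled connection itself, avoiding the two context switches incurred by
// dispatching the acquire to the pool's I/O thread.
final class ThreadSafeChannelPool {
  private final Queue<Channel> idle = new ConcurrentLinkedQueue<>();

  Channel tryAcquire() {
    Channel ch;
    while ((ch = idle.poll()) != null) {
      if (ch.isActive()) {
        return ch; // healthy pooled connection, no event-loop hop
      }
      // dead connections are simply dropped here
    }
    return null; // caller establishes a new connection
  }

  void release(Channel ch) {
    if (ch.isActive()) {
      idle.offer(ch);
    }
  }
}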

Improvements to SSL Performance

The following section describes various optimizations to improve the SSL performance.

Offloading DNS lookup and handshake to separate thread pool

During an SSL handshake, the DNS lookup that resolves a hostname to an IP address is a blocking operation. Consequently, the I/O thread executing the handshake can be held up for the entirety of the DNS lookup. This delay can result in request timeouts and other issues, especially when managing a substantial influx of incoming connections concurrently.

To tackle this concern, we developed an SSL initializer that conducts the DNS lookup on a separate thread prior to initiating the handshake. The resolved InetAddress, which contains both the IP address and the hostname, is then passed to the SSL handshake procedure, effectively circumventing the need for a DNS lookup during the handshake.
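
A rough sketch of this pattern, assuming a configured io.netty.bootstrap.Bootstrap named bootstrap and hostname/port variables in scope (the executor size is arbitrary):

import java.net.InetAddress;
import java.net.InetSocketAddress;
import java.net.UnknownHostException;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.CompletionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Dedicated pool for blocking DNS lookups so I/O threads are never held up.
ExecutorService dnsExecutor = Executors.newFixedThreadPool(4);

CompletableFuture
    .supplyAsync(() -> {
      try {
        // Blocking resolution happens here, off the I/O threads. The returned
        // InetAddress carries both the hostname and the resolved IP.
        return InetAddress.getByName(hostname);
      } catch (UnknownHostException e) {
        throw new CompletionException(e);
      }
    }, dnsExecutor)
    .thenAccept(address ->
        // Connect with a pre-resolved address: the SSL handshake no longer
        // triggers a DNS lookup on the I/O thread.
        bootstrap.connect(new InetSocketAddress(address, port)));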

Enabling Native SSL encryption/decryption

Java's default built-in SSL implementation carries a significant performance overhead. Netty offers a JNI-based SSL engine that demonstrates exceptional efficiency in both CPU and memory utilization. Upon enabling OpenSSL within the storage layer, we observed a notable 10% reduction in latency. (The router layer already utilizes OpenSSL.)  

To employ Netty Native SSL, one must include the pertinent Netty Native dependencies, as it interfaces with OpenSSL through the JNI (Java Native Interface). For more detailed information, please refer to https://netty.io/wiki/forked-tomcat-native.html.
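
With the native dependency on the classpath, selecting the OpenSSL provider is a small change; a minimal example using Netty's public API:

import io.netty.handler.ssl.OpenSsl;
import io.netty.handler.ssl.SslContext;
import io.netty.handler.ssl.SslContextBuilder;
import io.netty.handler.ssl.SslProvider;

// Prefer the JNI/OpenSSL engine when the native library loaded successfully,
// falling back to the JDK engine otherwise.
SslProvider provider = OpenSsl.isAvailable() ? SslProvider.OPENSSL : SslProvider.JDK;
SslContext sslContext = SslContextBuilder.forClient()
    .sslProvider(provider)
    .build(); // throws SSLException if the context cannot be created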

Improvements to Encode/Decode performance

This section focuses on the performance improvements we made when converting bytes to HTTP objects and vice versa; approximately 20% of our CPU cycles were spent encoding and decoding bytes. Unlike a typical service, Espresso has very rich headers. Our HTTP/2 implementation wraps the existing HTTP/1.1 pipeline with HTTP/2 functionality: the HTTP/2 layer handles network communication, while the core business logic resides within the HTTP/1.1 layer. Because of this, each incoming request required converting the HTTP/2 request to HTTP/1.1 and the response back to HTTP/2, which resulted in high CPU usage, memory consumption, and garbage creation.

To improve performance, we implemented a custom codec designed for efficient handling of HTTP headers. We introduced a new request class named Http1Request, which presents an HTTP/2 request as HTTP/1.1 by wrapping the underlying Http2Headers. The primary objective of this approach is to avoid the expensive conversion of HTTP/1.1 headers to HTTP/2 and vice versa.

For example:

public class Http1Headers extends HttpHeaders {

  // Wraps the HTTP/2 headers directly, so no per-request conversion to
  // HTTP/1.1 header objects is needed.
  private final Http2Headers _headers;

  ….

}

Operations such as get, set, and contains then operate directly on the Http2Headers:

@Override
public String get(String name) {
  // HTTP/2 header names are lowercase; str(...) converts the returned
  // CharSequence to a String.
  return str(_headers.get(AsciiString.cached(name).toLowerCase()));
}

To make this possible, we developed a new codec that is essentially a clone of Netty's Http2StreamFrameToHttpObjectCodec. This codec is designed to translate HTTP/2 StreamFrames to HTTP/1.1 requests/responses with minimal overhead. By using this new codec, we were able to significantly improve the performance of encode/decode operations and reduce the amount of garbage generated during the conversions.

Disabling HPACK Header Compression

HTTP/2 introduced a new header compression algorithm known as HPACK. It works by maintaining matching index tables (dictionaries) on both the client and the server. Instead of transmitting the complete string value, HPACK sends the associated index (an integer) when transmitting a header. HPACK encompasses two key components:

  1. Static Table - A dictionary comprising 61 commonly used headers.

  2. Dynamic Table - This table retains the user-generated header information.

HPACK header compression is tailored to scenarios where header contents remain relatively constant. Espresso, however, has very rich headers with stateful information such as timestamps, SCNs, and so on, so HPACK did not align well with Espresso's requirements.

Upon examining flame graphs, we observed a substantial stack dedicated to encoding/decoding dynamic tables. Consequently, we opted to disable dynamic header compression, leading to an approximate 3% enhancement in performance.

In Netty, this can be disabled using the following:

// Setting the dynamic header table size to 0 disables HPACK's dynamic table.
Http2FrameCodec frameCodec = Http2FrameCodecBuilder.forClient()
    .initialSettings(Http2Settings.defaultSettings().headerTableSize(0))
    .build();

Results

Latency Improvements

P99.9 Latency    HTTP/1.1   HTTP/2
Single Key Get   20ms       7ms (-66%)
Multi Key Get    80ms       20ms (-75%)

We observed up to a 75% reduction in read latencies at the 99th and 99.9th percentiles, with multi-key reads decreasing from 80ms to 20ms.

Figure 4. Latency reduction after HTTP/2

We observed similar latency reductions across the 90th percentile and higher.  

Reduction in TCP connections

                         HTTP/1.1     HTTP/2
No. of TCP connections   32 million   3.9 million (-88%)

We observed an 88% reduction in the number of connections required between routers and storage nodes in some of our largest clusters.

Figure 5. Total number of connections after HTTP/2

Reduction in Garbage Collection time

We observed substantial reductions in garbage collection times: 75% for young gen and 81% for old gen.

GC          HTTP/1.1   HTTP/2
Young Gen   2000 ms    500 ms (-75%)
Old Gen     80 ms      15 ms (-81%)

Figure 6. Reduction in time for GC after HTTP/2

Waiting time to acquire a Storage Node connection

HTTP/2 eliminates the need to wait for a storage node connection by enabling multiplexing on a single TCP connection, which is a significant factor in reducing latency compared to HTTP/1.1.

                                                       HTTP/1.1   HTTP/2
Wait time in router to get a storage node connection   11ms       0.02ms (-99%)

Figure 7. Reduction in wait time to get a connection after HTTP/2

Conclusion

Espresso has a large server fleet and is mission-critical to a number of LinkedIn applications. With the HTTP/2 migration, we successfully solved Espresso's scalability problems caused by the huge number of TCP connections required between the router and the storage nodes. The new architecture also reduced latencies by 75% and made Espresso more resilient.

Acknowledgments

I would like to thank my colleagues Antony Curtis, Yaoming Zhan, BinBing Hou, Wenqing Ding, Andy Mao, and Rahul Mehrotra who worked on this project. The project demanded a great deal of time and effort due to the complexity involved in optimizing the performance. I would like to thank Kamlakar Singh and Yun Sun for reviewing the blog and providing valuable feedback. 

We would also like to thank our management Madhur Badal, Alok Dhariwal and Gayatri Penumetsa for their support and resources, which played a crucial role in the success of this project. Their encouragement and guidance helped the team overcome challenges and deliver the project on time.