Performance Articles

  • Fixing the Plumbing: How We Identify and Stop Slow Latency Leaks at LinkedIn

    October 31, 2017

    At LinkedIn, we pay attention to site speed at every step of the release process, from code development to production ramp. But inevitably, the performance of our pages degrades over time (we use the word “pages” to denote webpages as well as mobile apps). In this post, we go over the tools and processes we use to catch and fix these degradations. While this is...

  • Common Issue Detection for CPU Profiling

    September 5, 2017

    Co-authors: John Nicol, Chen Li, Peinan Chen, Tao Feng, and Hari Ramachandran. LinkedIn has a centralized approach for profiling services that has helped identify many performance issues. However, many of those issues are common across multiple services. In this blog post, we will discuss how we have enhanced our approach to also detect and report common...

  • The TCP Tortoise: Optimizations for Emerging Markets

    August 11, 2017

    Serving fast pages is a core aspiration at LinkedIn. As part of this initiative, we continuously experiment with and study the various layers of our stack, identifying optimizations to ensure that we use optimal protocols and configurations at every layer. As LinkedIn migrated to serving its pages over HTTP/2 earlier this year, it became imperative that we...

  • TrafficShift: Load Testing at Scale

    May 11, 2017

    Co-authors: Anil Mallapur and Michael Kehoe. LinkedIn started as a professional networking service in 2003, serving user requests out...

  • Boosting Site Speed Using Brotli Compression

    May 2, 2017

    Site speed is one of LinkedIn’s major engineering priorities, as faster site speed directly correlates with higher engagement. There...

  • Implementing Instant Job Listing Pages

    April 20, 2017

    In early February, we reduced our 90th percentile U.S. subsequent page load time by 46% on our Job Details page. We achieved this by...