LinkedIn Lite: A Lightweight Mobile Web Experience

March 22, 2018

The opportunity in India

India is a mobile-first country, with 71% of the population accessing the internet exclusively via a mobile connection. Also, 85% of the mobile population accesses the internet on an Android device, with UCWeb being the preferred browser, followed by Chrome and Opera.

LinkedIn India has more than 47 million members and is one of LinkedIn’s biggest markets, second only to the US. Of the thousands of new members signing up for LinkedIn in India every day, a staggering 55% do so via mobile.

For this rapidly-growing, mobile-first market, we introduced the LinkedIn Lite web experience in 2016. Later, in 2017, we expanded the LinkedIn Lite experience to members in 60+ countries, providing a mobile web experience that was sometimes four times faster than the regular site. In this blog post, we discuss the process of identifying the market fit for LinkedIn Lite, analyzing our site performance for India’s mobile-first population, and the performance optimizations we added to the LinkedIn Lite app.

The case for a lightweight mobile web experience

In India, 85% of LinkedIn members using a mobile device use an Android phone, with roughly 75% of them accessing LinkedIn using the Chrome browser. In contrast, other popular sites in the country, such as e-commerce giants Amazon and Flipkart, might have a different traffic distribution across browsers, with a majority of users accessing those sites from UCWeb, a browser that employs server-side compression to reduce page load times. In short, we needed a mobile experience in India that was on par with (or better than) the experiences our members were getting on other sites. Added to that was the fact that more and more Indians were connecting to LinkedIn every day, on networks all across the country—we were worried that even minor app performance issues could be compounded by slowdowns elsewhere in the network.

The older mobile web experience was built several years ago on a Node.js stack using jQuery and other libraries, and it wasn’t performant, with page load times at the 90th percentile often falling outside LinkedIn’s site speed performance goals. At the time, we suspected that the primary reason for this shortfall was the site’s performance in countries with low bandwidth.

Performance analysis
Around April 2016, we did a deep-dive analysis of the performance bottlenecks of the LinkedIn mobile frontend and deduced the following:

  • The DNS lookup times, if the browser/OS hadn’t cached the DNS, could be as long as a second, but this was quite rare unless the network was flaky and the reception was poor. The number of non-zero DNS lookup times was quite low overall.

  • The connect times in India varied a lot, and at times took as long as 2 seconds!

  • The old LinkedIn mobile stack had many redirects into different systems to determine the appropriate experience for the member, since there were a couple of independent stacks serving traffic at the time. Depending on several factors, including SSL, there were roughly 2 to 4 redirects at times. On a slow connection (lower 3G speeds or during network congestion), the time spent in redirects was about 3.7 seconds on average. The time to first byte (TTFB) was about 5.6 seconds.

Effect of payload and large JS bundles on performance
The previous generation mobile web stack was shipping over 500 KiB of JavaScript to boot its client-side rendered application. With such payloads, the problems are two-fold:

  1. The payload size is directly proportional to the time to transfer the content. Without boring you with too many details: under TCP slow start, the amount of data delivered per round trip grows roughly exponentially (assuming no packet loss), so the transfer time depends on the number of round trips needed. A single round-trip time (RTT) can take as long as 1.2 seconds on most cellular networks in India. Assuming no packet loss, about 13 KiB is transferred in 1 RTT, 39 KiB takes 2 RTTs, up to 91 KiB can be transmitted by the third RTT, and so on. So it’s no surprise that the smaller the payload, the faster the transfer.

  2. It has been established time and again that large JavaScript bundles, especially those required to initialize the app, suffer from long parse and compile times. Our research also showed that JavaScript parse/compile times are typically much longer on Android phones than on iOS phones due to differences in processor architecture and performance. Since JavaScript is parsed/compiled on a single core, it takes longer to bootstrap larger bundles on Android phones, which are the most prevalent in India.
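To make the round-trip arithmetic above concrete, here is a minimal sketch. It assumes an idealized TCP slow start with a ~13 KiB initial window that doubles every round trip and no packet loss; the function name is illustrative, not part of any real stack.

```javascript
// Estimate round trips needed to transfer a payload under TCP slow start
// (idealized model): ~13 KiB arrives in the first RTT, and the congestion
// window doubles each subsequent RTT, with no packet loss.
function roundTripsFor(payloadKiB) {
  let delivered = 0;
  let windowKiB = 13; // approximate initial congestion window
  let rtts = 0;
  while (delivered < payloadKiB) {
    delivered += windowKiB; // data that arrives during this round trip
    windowKiB *= 2;         // slow start doubles the window each RTT
    rtts += 1;
  }
  return rtts;
}

console.log(roundTripsFor(13));  // 1
console.log(roundTripsFor(91));  // 3
console.log(roundTripsFor(500)); // 6
```

At 1.2 seconds per round trip, a 500 KiB bundle spends over 7 seconds in transfer alone, while a sub-91 KiB payload fits in roughly 3.6 seconds.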

Project Bolt, a simple beginning

The product team established a goal of a 6-second page load time on a 2G/low 3G (approximately 100-150 Kbps) network, and the engineering team adopted the following tenets.

Server-side rendering (SSR)
Rendering the first view on the server gives good perceived performance for end users and in turn improves the overall member experience. SSR is also faster to render in cold-cache/first-visit situations, which cover a good majority of use cases:

  1. Members visit the site for the first time, or the browser has evicted the assets from its cache—the visit can also come from an email client or a push channel.

  2. The site has been updated since the member’s last visit, especially given that we deploy almost daily at LinkedIn.

Smaller payload
A smaller payload means fewer roundtrips, and hence a faster transfer time.

By capping the payload to less than 90 KiB (we set an internal limit of 75 KiB), the payload can be transferred in roughly 3 RTTs. Assuming a slow or congested network, where 1 RTT takes a second or more, the total transfer would still be under 4 seconds, giving us enough headroom. Apart from the faster transfer, a smaller payload also helps with parsing times, especially for JavaScript.

Less JavaScript
Shipping less JavaScript to the browser directly translates to faster parse/compile times, and hence, a faster, snappier UI.

There was a detailed discussion and quite a lot of research around this, which is captured succinctly in this thread. While the thread is on Ember, it’s true for almost all use-cases and applies to all of the popular frameworks that are in use today.

The high-level overview is that JavaScript parse times are generally faster on high-powered iOS chips than on lower-powered multi-core Android chips; the latter are the most prevalent in emerging markets like India.

This hypothesis was corroborated by Addy Osmani from Google in a recently published article on the cost of JavaScript.

No redirects
As called out earlier, redirects are a huge performance penalty in general, and even more so in emerging markets, where bandwidth is low and network congestion is high. By removing redirects, we can save up to 3.5 seconds compared to the legacy mobile web site.

Leverage the (modern) browser
Based on our research, we found that about 70% of members were using Chrome on an Android device, which means they’re automatically updated most of the time to the latest browser version, with all the latest goodness like Promises, the Fetch API, etc. That means there’s no need to ship polyfills for 70% of the population, making our JavaScript bundle smaller, which directly translates to faster parse times and a better-performing website.
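A minimal sketch of the idea: detect the needed features at runtime and only load a polyfill bundle when something is missing, so modern browsers never pay the cost. The `needsPolyfills` helper and the bundle URL are hypothetical names, not LinkedIn’s actual implementation.

```javascript
// Hypothetical feature detection: only older browsers pay the polyfill cost.
function needsPolyfills(global) {
  return typeof global.Promise !== 'function' ||
         typeof global.fetch !== 'function';
}

// In the page, something like (bundle URL is illustrative):
// if (needsPolyfills(window)) {
//   const script = document.createElement('script');
//   script.src = '/assets/polyfills.js';
//   document.head.appendChild(script);
// }
```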

LinkedIn Lite architecture


Since SSR was one of the architectural tenets of Lite, our choice of libraries and frameworks was limited, since many of them support only client-side rendering of components. React could have been one of the choices if not for its library size, which was 34 KiB (compressed); adding app code would exceed our size limits and hence fail to meet the performance goals of Lite. So we decided to go with the Play framework, an asynchronous, lightweight Java framework that had been battle-tested at LinkedIn for years. Our templating choice became Dust.js, since LinkedIn’s Play framework played (pun unintended) nicely with Dust.js.

For managing asynchronous tasks, we decided to use Parseq, another open source library created by LinkedIn, which is an excellent choice for doing non-blocking, asynchronous task management. When combined with ParseqRestClient, it is an excellent choice for doing asynchronous I/O to downstream services. It also allows you to compose tasks, which has been one of its strengths when compared to other libraries we have used in the past.

For minimizing JavaScript, we decided that the server would always talk HTML—no client-side templating, and Dust.js isn’t shipped to the client at all! The architecture was completely reimagined from the ground up to optimize the experience for mobile devices while taking advantage of the latest APIs that web browsers offer. This was the crucial decision that made LinkedIn Lite truly lightweight, making it possible to achieve our goal of rendering every page in under 6 seconds, even under poor network conditions and on low-end devices. The other main reason to use HTML as opposed to JSON is that the former is a “streaming friendly” format, while the latter isn’t. To parse JSON, the browser has to wait until the entire payload has been downloaded before it has a valid JSON object, and only then can JavaScript parse it.
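The difference is easy to demonstrate: a truncated HTML chunk still contains complete, renderable elements, while a truncated JSON payload is unusable until the last byte arrives. A small illustration:

```javascript
// A browser receiving this partial HTML can already paint the first <li>:
const partialHtml = '<ul><li>First update</li><li>Sec';

// The equivalent partial JSON cannot be parsed at all until it is complete:
const partialJson = '{"updates": ["First update", "Sec';
let parsed = null;
try {
  parsed = JSON.parse(partialJson);
} catch (e) {
  // SyntaxError: the payload is not valid JSON until fully downloaded.
}
console.log(parsed); // null
```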

For user interactions, we used the Fetch API, which retrieves HTML from the server and updates the DOM in the client.
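A minimal sketch of that interaction pattern (function and endpoint names are hypothetical, not LinkedIn’s actual code): fetch a server-rendered HTML fragment and swap it into the page, with no client-side templating involved.

```javascript
// Fetch a server-rendered HTML fragment and inject it into a container.
// The fetch implementation is injectable to keep the sketch testable.
async function updateSection(container, url, fetchFn = fetch) {
  const response = await fetchFn(url, { credentials: 'same-origin' });
  if (!response.ok) {
    throw new Error('Request failed: ' + response.status);
  }
  // The server talks HTML, so the response body goes straight into the DOM.
  container.innerHTML = await response.text();
}

// Usage in the browser (illustrative endpoint):
// updateSection(document.querySelector('#feed'), '/feed/more');
```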

We were six engineers, and it took us four months in total from start to finish to build a lightweight site that outperformed the legacy mobile site by miles. On a throttled 100 Kbps connection, the new site was loading and interactive in under 6 seconds, as per our goal. Given these improvements in site speed, it was not surprising to see significant growth in member engagement in India after Lite’s launch, including a four-fold increase in job applications, twice as many sessions, and three times the overall member engagement on LinkedIn.


One of the most important factors in Lite’s success is that we analyzed the bottlenecks in the mobile application in India and made addressing them the core tenets of a system architected to avoid the same problems. It involved working with the infrastructure teams and understanding the entire stack from the ground up before deciding on an architecture, rather than relying on a magic sauce to solve our problems.

We will follow up with a blog post on how we transformed LinkedIn Lite into a progressive web app without sacrificing speed or needing a huge rewrite. We have adopted a unique approach that combines SSR with service workers for speed, transforming the SSR app into a progressive web app.


The initial team consisted of yours truly (Gopal Venkatesan), Prateek Sachdev, Kaushik Srinivasan, Snehal Mhaske, Prashant Momale, Ramitha Chitloor, Neena Jose, Lisha Prakash, and Gaurav Gupta. LinkedIn Lite wouldn’t have been possible without the initial idea from our friendly product manager Ajay Datta. Special thanks to Raghu Hiremagalur, Ganesan Venkatasubramanian, Doug Young, and Brian Geffon for providing help and guidance along the way.