LinkedIn Lite: A Lightweight Mobile Web Experience
March 22, 2018
The opportunity in India
India is a mobile-first country, with 71% of the population accessing the internet only via a mobile connection. Also, 85% of the mobile population accesses the internet on an Android device, with UCWeb being the most popular browser, followed by Chrome and Opera.
LinkedIn India has more than 47 million members and is one of LinkedIn's biggest markets, second only to the US. Of the thousands of new members signing up for LinkedIn in India every day, a staggering 55% do so via mobile.
For this rapidly-growing, mobile-first market, we introduced the LinkedIn Lite web experience in 2016. Later, in 2017, we expanded the LinkedIn Lite experience to members in 60+ countries, providing a mobile web experience that was sometimes four times faster than the regular site. In this blog post, we discuss the process of identifying the market fit for LinkedIn Lite, analyzing our site performance for India’s mobile-first population, and the performance optimizations we added to the LinkedIn Lite app.
The case for a lightweight mobile web experience
In India, 85% of LinkedIn members using a mobile device use an Android phone, with roughly 75% of them accessing LinkedIn through the Chrome browser. In contrast, other popular sites in the country, such as e-commerce giants Amazon and Flipkart, might see a different traffic distribution across browsers, with a majority of users even accessing those sites from UCWeb, a browser that employs server-side compression to reduce page load times. In short, we needed a mobile experience in India that was on par with (or better than) the experiences our members were getting on other sites. Added to that was the fact that more and more Indians were connecting to LinkedIn every day, on networks all across the country; we were worried that even minor app performance issues could be compounded by slowdowns elsewhere in the network.
The older LinkedIn.com mobile web experience was built several years ago on a Node.js stack using jQuery and other libraries, and it was not performant: page load times at the 90th percentile often fell short of LinkedIn's site speed performance goals. At the time, we believed the primary reason for this shortfall was the site's poor performance in countries with low bandwidth.
Around April 2016, we did a deep-dive analysis of the performance bottlenecks of the LinkedIn mobile frontend and deduced the following:
The DNS lookup times, if the browser/OS hadn't cached the DNS, could be as long as a second, but this was quite rare unless the network was flaky and reception was poor. Overall, the proportion of requests with non-zero DNS lookup times was quite low.
The connect times in India varied a lot, and at times took as long as 2 seconds!
The old LinkedIn mobile stack had many redirects into different systems to determine the appropriate experience for the member, since a couple of independent stacks were serving LinkedIn.com at the time. Depending on several factors, including SSL, there could be roughly two to four redirects. On a slow connection (lower 3G speeds or during network congestion), the time spent in redirects was about 3.7 seconds on average, and the time to first byte (TTFB) was about 5.6 seconds.
Effect of payload and large JS bundles on performance
The payload size directly affects the time to transfer the content. Without boring you with too many details: because of TCP slow start, the amount of data that can be sent per round trip grows exponentially, not accounting for packet loss. A single round-trip time (RTT) can take as long as 1.2 seconds on most cellular networks in India. Assuming no packet loss, 13 KiB is transferred in 1 RTT, 39 KiB within 2 RTTs, up to 91 KiB by the third RTT, and so on. So it's no surprise that the smaller the payload, the faster the transfer times.
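To make the round-trip arithmetic concrete, here is a small sketch. The initial congestion window of 10 segments and the ~1,330-byte effective segment size are assumptions chosen to reproduce the numbers above, not measured values from LinkedIn's servers.

```javascript
// Sketch: cumulative bytes deliverable after n round trips under TCP
// slow start, assuming initcwnd = 10 segments and an effective segment
// size of ~1330 bytes (both illustrative assumptions).
const INITCWND = 10;       // segments sent in the first round trip
const SEGMENT_BYTES = 1330;

function kibAfterRtts(rtts) {
  let cwnd = INITCWND;
  let segments = 0;
  for (let i = 0; i < rtts; i++) {
    segments += cwnd; // a full window is sent each round trip...
    cwnd *= 2;        // ...and slow start doubles the window per RTT
  }
  return Math.round((segments * SEGMENT_BYTES) / 1024);
}

console.log(kibAfterRtts(1), kibAfterRtts(2), kibAfterRtts(3));
// cumulative segments 10, 30, 70 -> roughly 13, 39, 91 KiB
```

At 1.2 seconds per RTT, keeping the critical payload under the first window or two is worth several seconds on a congested cellular link.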
Project Bolt, a simple beginning
The product team established a goal of a 6-second page load time on a 2G/low-3G (approximately 100-150 Kbps) network, and the engineering team adopted the following tenets.
Server-side rendering (SSR)
Rendering the first view on the server gives good perceived performance for end users, which in turn improves the overall member experience. SSR also renders faster in cold-cache/first-visit situations, which cover a good majority of use cases:
Members visit the site for the first time, or the browser has evicted the assets from its cache (the visit can also come from an email client or a push channel).
The site has been updated since the member's last visit, which is especially likely given that we deploy almost daily at LinkedIn.
A smaller payload means fewer roundtrips, and hence a faster transfer time.
There was a detailed discussion and quite a lot of research around this, which is captured succinctly in this thread. While the thread is about Ember, its conclusions hold for almost all use cases and apply to all of the popular frameworks in use today.
As called out earlier, redirects are a huge performance penalty in general, and even more so in emerging markets, where bandwidth is low and network congestion is high. By removing redirects, we can save up to 3.5 seconds as compared to the legacy mobile web site.
Leverage the (modern) browser
LinkedIn Lite architecture
Since SSR was one of Lite's architectural tenets, our choice of libraries and frameworks was limited, since many of them support only client-side rendering of components. React could have been a candidate if not for its library size of 34 KiB (compressed); adding app code on top would have exceeded our size limits and hence failed to meet Lite's performance goals. So we decided to go with the Play framework, an asynchronous, lightweight Java framework that had been battle-tested at LinkedIn for years. Our templating choice became Dust.js, since LinkedIn's Play framework played (pun unintended) nicely with Dust.js.
For managing asynchronous tasks, we decided to use Parseq, another open source library created by LinkedIn that is well-suited to non-blocking, asynchronous task management. When combined with ParseqRestClient, it is an excellent choice for doing asynchronous I/O to downstream services. It also allows you to compose tasks, which has been one of its strengths compared to other libraries we have used in the past.
For user interactions, we used the Fetch API to retrieve server-rendered HTML from the server and update the DOM on the client.
A team of six engineers took four months, start to finish, to build a lightweight LinkedIn.com that outperformed the legacy mobile site by miles. On a throttled 100 Kbps connection, the new site loaded and became interactive in under 6 seconds, meeting our goal. Given these improvements in site speed, it was not surprising to see significant growth in member engagement in India after Lite's launch, including a four-fold increase in job applications, twice as many sessions, and three times the overall member engagement on LinkedIn.
One of the most important factors in Lite's success is that we analyzed the bottlenecks in the mobile application in India and made addressing them the core tenets of a system architected to avoid the same problems. That involved working with the infrastructure teams and understanding the entire stack from the ground up before deciding on an architecture, rather than relying on a magic sauce to solve our problems.
We will follow up with a blog post on how we transformed LinkedIn Lite into a progressive web app without sacrificing speed or needing a huge rewrite. We have adopted a unique approach that combines SSR with service workers for speed and transforms the SSR app into a progressive web app.
The initial team consisted of yours truly (Gopal Venkatesan), Prateek Sachdev, Kaushik Srinivasan, Snehal Mhaske, Prashant Momale, Ramitha Chitloor, Neena Jose, Lisha Prakash, and Gaurav Gupta. LinkedIn Lite wouldn’t have been possible without the initial idea from our friendly product manager Ajay Datta. Special thanks to Raghu Hiremagalur, Ganesan Venkatasubramanian, Doug Young, and Brian Geffon for providing help and guidance along the way.