In a previous post, we discussed how we test LinkedIn's mobile stack, including our Node.js mobile server. Today, we’ll tell you how we make this mobile server fast. Here are our top 10 performance takeaways for working with Node.js:
1. Avoid synchronous code
By design, Node.js is single threaded. Because one thread must handle many concurrent requests, that thread can never be allowed to wait on a blocking, synchronous, or long-running operation. A distinguishing feature of Node.js is that it was designed and implemented from top to bottom to be asynchronous. This makes it an excellent fit for evented applications.
Unfortunately, it is still possible to make synchronous/blocking calls. For example, many file system operations have both asynchronous and synchronous versions, such as writeFile and writeFileSync. Even if you avoid synchronous methods in your own code, it's still possible to inadvertently use an external library that has a blocking call. When you do, the impact on performance is dramatic.
Our initial logging implementation accidentally included a synchronous call to write to disk. This went unnoticed until we did performance testing: when benchmarking a single instance of Node.js on a developer box, this one synchronous call caused throughput to drop from thousands of requests per second to just a few dozen!
2. Turn off socket pooling
The Node.js http client automatically uses socket pooling: by default, this limits you to 5 sockets per host. While socket reuse keeps resource growth under control, it becomes a serious bottleneck when you need to handle many concurrent requests that all need data from the same host. In these scenarios, it's a good idea to increase maxSockets or disable socket pooling entirely:
3. Don't use Node.js for static assets
For static assets, such as CSS and images, use a standard webserver instead of Node.js. For example, LinkedIn mobile uses nginx. We also take advantage of Content Delivery Networks (CDNs), which copy the static assets to servers around the world. This has two benefits: (1) we reduce load on our Node.js servers and (2) CDNs allow static content to be delivered from a server close to the user, which reduces latency.
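As an illustration only (the paths, port, and cache lifetime are all made up), an nginx configuration for this split might look like:

```nginx
server {
    listen 80;

    # Serve static assets straight from disk with long cache lifetimes
    location /static/ {
        root /var/www/assets;   # placeholder path
        expires 30d;
    }

    # Everything dynamic is proxied through to the Node.js server
    location / {
        proxy_pass http://127.0.0.1:3000;   # placeholder upstream
    }
}
```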
4. Render on the client-side
Let's quickly compare rendering a page server-side vs. client-side. If we have Node.js render server-side, we'll send back an HTML page like this for every request:
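The page sample is missing from this excerpt; a hypothetical stand-in (with "John" as a placeholder user) would look something like:

```html
<html>
  <head>
    <title>Welcome</title>
    <link rel="stylesheet" href="/static/app.css">
  </head>
  <body>
    <!-- Only the name below varies per user; the rest never changes -->
    <p>Hello, John!</p>
    <script src="/static/app.js"></script>
  </body>
</html>
```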
Note that everything on this page, except for the user's name, is static: that is, it's identical for every user and page reload. So a much more efficient approach is to have Node.js return just the dynamic data needed for the page as JSON:
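The JSON sample is also missing here; for the same hypothetical page, the entire response shrinks to something like:

```json
{ "name": "John" }
```

The client merges this data into a locally cached template, so the static markup is downloaded once instead of on every request.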
5. Use gzip
Most servers and clients support gzip to compress requests and responses. Make sure you take advantage of it, both when responding to clients and when making requests to remote servers:
6. Go parallel
Try to do all your blocking operations - that is, requests to remote services, DB calls, and file system access - in parallel. This reduces overall latency to that of the slowest blocking operation, rather than the sum of them all run in sequence. To keep the callbacks and error handling clean, we use Step for flow control.
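Step itself is a small library; the underlying pattern is just a countdown over callbacks. A hand-rolled sketch of that idea (the helper name and the stand-in tasks are ours, not from the post):

```javascript
// Fire every task at once; invoke done when the last one finishes,
// or immediately on the first error.
function parallel(tasks, done) {
  var results = [];
  var pending = tasks.length;
  var failed = false;
  tasks.forEach(function (task, i) {
    task(function (err, value) {
      if (failed) return;
      if (err) { failed = true; return done(err); }
      results[i] = value;
      if (--pending === 0) done(null, results);
    });
  });
}

// Stand-ins for a remote service call and a DB call; real versions
// would finish at different times, and done fires after the slowest.
var out;
parallel([
  function (cb) { cb(null, 'profile'); },
  function (cb) { cb(null, 'connections'); }
], function (err, results) {
  out = results;
});
```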
7. Go session-free
LinkedIn mobile uses the Express framework to manage the request/response cycle. Most express examples include the following configuration:
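The snippet itself is missing from this excerpt; in the Express of that era (2.x/3.x) it typically read along these lines, with a placeholder secret (in Express 4+, these middleware moved into the separate cookie-parser and express-session packages):

```javascript
app.use(express.cookieParser());
app.use(express.session({ secret: 'keyboard cat' }));  // placeholder secret
```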
By default, session data is stored in memory, which can add significant overhead to the server, especially as the number of users grows. You could switch to an external session store, such as MongoDB or Redis, but then each request incurs the overhead of a remote call to fetch session data. Where possible, the best option is to store no state on the server side at all. Go session-free by NOT including the express config above, and you'll see better performance.
8. Use binary modules
10. Keep your code small and light
Working with mobile, where devices are slower and latencies are higher, teaches you to keep your code small and light. Apply this same idea to your server code as well. Revisit your decisions from time to time and ask yourself questions like: “Do we really need this module?”, “Why are we using this framework? Is it worth the overhead?”, “Can we do this in a simpler way?”. Smaller, lighter code is usually more efficient and faster.