Glimmer: Blazing Fast Rendering for Ember.js, Part 1

Coauthors: Chad Hietala and Sarah Clatterbuck

At LinkedIn, we use Ember.js as our client-side web application framework. This is part of the larger Pemberly architecture we employ. Historically, native applications have had a large advantage over web applications not only in technical capabilities, but also in their ability to drive engagement, thanks to extremely rich SDKs. That said, this environment has changed drastically over the past five years, as the browser has evolved from a document viewer into the world’s most widely distributed application runtime. Below are some examples of functionality that would have been associated with a native experience five years ago but is now available to the web platform:

Ember’s goal is to be an SDK for the web that ties together these low-level APIs into a more productive developer experience that leads developers down a path of success. Having an opinionated toolchain not only allows you to scale up, but also makes it possible to completely overhaul pieces of infrastructure in a backward-compatible way without impacting application code.

By default, Ember applications construct their UIs on a user's computer instead of consuming a string of HTML rendered on the server. While this is done to allow for very interactive applications after the initial render, you have to deal with reality: users on the web are network-bound, CPU-bound, and memory-bound. Because of this, app developers must pay attention not only to payload size, but also to parse/compile and execution time. Having a framework with an opinionated architecture across applications allows us to address performance-related bottlenecks in a structured manner and optimize the system holistically.

Last year, we contributed to an evolution of Ember’s rendering engine to drastically reduce payload size, CPU time, and memory pressure, furthering our efforts to provide a delightful browsing experience in our flagship mobile web and desktop web apps. While the team has also been working on FastBoot, a means of server-side rendering Ember applications, we still needed to optimize the runtime for future work we would like to do. Our team at LinkedIn contributed to both the development and the integration of a ground-up rewrite of the rendering engine under the Glimmer2 code name, now simply known as Glimmer.

The first iteration of Glimmer, now known as Glimmer1, was a thin layer that sat on top of HTMLBars. At EmberConf in 2016, the next generation of Glimmer was announced as a ground-up rewrite that would incorporate learnings and realizations from the past five years of client-side rendering to increase the performance of this area of the framework. Those learnings include:

  1. Components should become first-class primitives of the rendering engine, allowing for runtime optimizations.
  2. Templates are just a declarative way of describing a UI and because of this, they have properties like referential transparency.
  3. Templates have time-varying values, making them essentially functional reactive programs (FRP) that you can re-run to update the values and produce a new UI.
  4. Instead of push-based semantics for updating the template, the system can be modeled as a discrete pull-based system with no notion of observers or notifications.
  5. Since we are effectively designing a programming language and the underlying runtime, we should architect it using well-established tenets of programming language implementation, such as JIT compilers and bytecode interpreters. Having a VM architecture would allow us to more easily implement well-known optimizations such as constant folding, inlining, macro expansion, etc.
  6. Since the project would be sufficiently complex, we wanted a first-class type system, so we chose to write Glimmer in TypeScript. In addition to making the project more maintainable, types also encourage consistent object shapes, a heuristic several JavaScript engines use to optimize code (see the sketch after this list).
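
To make that last point concrete, here is a minimal TypeScript sketch (our own illustration, not code from Glimmer) of how a class with a fixed set of typed fields keeps every instance on the same hidden class, so engine call sites that read those fields stay monomorphic:

  // Hypothetical example: every TextChunk has the same property layout
  // ("shape"), created in the same order, so reads of `value` and `trusted`
  // stay monomorphic in engines like V8.
  class TextChunk {
    constructor(
      private value: string,
      private trusted: boolean
    ) {}

    toHTML(): string {
      return this.trusted ? this.value : escapeHTML(this.value);
    }
  }

  function escapeHTML(raw: string): string {
    return raw
      .replace(/&/g, "&amp;")
      .replace(/</g, "&lt;")
      .replace(/>/g, "&gt;");
  }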

With these design considerations in mind, we first had to change how templates were compiled in order to drastically reduce the amount of code we send to a user’s browser. This involved changing the compilation stack that runs during the build of an Ember application.

Ahead-of-Time (AoT) stack

This part of the Glimmer architecture shares a lot with the Glimmer1/HTMLBars precompilation stack in that it uses the Handlebars parser and a spec-compliant HTML parser and tokenizer to produce a combined Abstract Syntax Tree (AST), which is consumed by the JavaScript compiler. While the majority of the AoT compiler stack is the same as its predecessor, it differs in that it produces a JSON structure known as the wire format instead of an executable JavaScript program. Below is a high-level diagram of how the stack works.

Ember offline precompilation stack

Parsing stack

Since Glimmer still uses Handlebars as its templating language, we simply leverage Handlebars’ grammar and its parser, which knows how to create a Handlebars AST. Since we can’t render that AST directly, we use DOM APIs to construct the rendered layout in the browser. This means we need to validate the HTML ahead of time to ensure we do not produce an invalid DOM. To do this, we leverage the Simple HTML Tokenizer that was written for HTMLBars. The tokenizer is a W3C spec-compliant HTML tokenizer, which guarantees that we will produce a valid DOM at runtime. This also provides a nice developer convenience: invalid HTML produces a compiler error at build time instead of unexpected errors at runtime.

When we initially parse the Handlebars template, the parser produces an AST that looks something like the following:
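
As a concrete illustration, suppose the template being compiled is something like `<div>Hello, {{name}}</div>` (our own stand-in, since the original example isn’t reproduced here). A simplified sketch of the Handlebars AST for it would be:

  // Simplified sketch of a Handlebars AST for "<div>Hello, {{name}}</div>"
  const handlebarsAst = {
    type: "Program",
    body: [
      { type: "ContentStatement", value: "<div>Hello, " },
      {
        type: "MustacheStatement",
        path: { type: "PathExpression", parts: ["name"] }
      },
      { type: "ContentStatement", value: "</div>" }
    ]
  };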

Once we produce the Handlebars AST, we enter at the Program node and traverse the tree. As we come across “ContentStatement” nodes, we tokenize their values so we can produce AST nodes that represent the static HTML and any static text in that HTML.
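
Continuing with the same stand-in template, the combined AST re-expresses the static HTML as element and text nodes alongside the original mustache. A simplified sketch (node names approximate, not Glimmer’s exact output) might look like:

  // Simplified sketch of the combined HTML + Handlebars AST
  const combinedAst = {
    type: "Program",
    body: [
      {
        type: "ElementNode",
        tag: "div",
        attributes: [],
        children: [
          { type: "TextNode", chars: "Hello, " },
          {
            type: "MustacheStatement",
            path: { type: "PathExpression", parts: ["name"] }
          }
        ]
      }
    ]
  };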

We now have a combined AST (HTML and Handlebars) that we can use in the compiler stack to produce a more compact intermediate representation (IR).

Compiler stack

Given the combined AST, we need to compile it into the wire format. To do this, we first traverse the AST and refine it into a flat list of actions. Each action is simply a tuple where the first item is the operation name and the second item is the AST node.
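
For the stand-in template above, that flat list of actions might look something like this (action names are illustrative, not necessarily the exact ones Glimmer uses):

  // Illustrative flat list of [operationName, astNode] actions,
  // derived from the combinedAst sketch above
  const divNode = combinedAst.body[0];
  const actions = [
    ["startProgram", combinedAst],
    ["openElement", divNode],           // <div>
    ["text", divNode.children[0]],      // "Hello, "
    ["mustache", divNode.children[1]],  // {{name}}
    ["closeElement", divNode],          // </div>
    ["endProgram", combinedAst]
  ];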

From here, we want to refine this data structure into a more specific IR. In general, we want a set of instructions that will be read into the Glimmer runtime. So in this case, we pass this sequence of instructions into the Template Compiler to produce a list of opcodes that looks like the following:
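
A heavily simplified version of that opcode list for our stand-in template could read as follows (illustrative only; the real opcode names and arguments differ in detail):

  // Simplified opcode list emitted by the Template Compiler
  const templateOpcodes = [
    ["openElement", "div"],
    ["text", "Hello, "],
    ["append", ["unknown", ["name"]]],  // we can't yet tell what {{name}} refers to
    ["closeElement"]
  ];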

You may notice that these opcodes have no notion of a "MustacheStatement"—instead, we have this "unknown" opcode. This is largely because Handlebars syntax is ambiguous during this phase of compilation in terms of what the mustache represents. While we always attempt to refine these opcodes into a more specific representation, in this specific case we cannot, so we must punt the resolution to the runtime.

Now that we have this flat list of opcodes, we use the JavaScript Compiler to produce the wire format that will be consumed at runtime. While we refer to it as the JavaScript Compiler, the wire format is actually just a blob of JSON.
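
As a rough illustration (not the exact format, which has evolved across Glimmer versions), the wire format for our stand-in template might be serialized along these lines:

  // Illustrative wire format: plain JSON that mirrors the opcode list above
  const wireFormat = JSON.stringify({
    statements: [
      ["openElement", "div"],
      ["text", "Hello, "],
      ["append", ["unknown", ["name"]]],
      ["closeElement"]
    ]
  });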

In the simple case we have been working with, we have "append" and "unknown" opcodes. At runtime, we will resolve the "unknown" and then we will "append" it. In other words, “unknown” is an expression named “name” that will be evaluated and passed to “append.”

Advantages of the wire format

In Glimmer1, the JavaScript Compiler actually produced a JavaScript program; however, this program tended to be large. This led to poor network performance and had the additional problem of being extremely hard to optimize, since the program was opaque to the runtime. By having a format that can be represented as JSON, we get three main benefits:

  • We can lazily parse the program at runtime with "JSON.parse" (see the sketch after this list);
  • We can begin to think about moving parts of the runtime off of the main thread;
  • We can enable adaptive runtime optimizations.
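
Here is a minimal sketch of the first benefit (our own example, not Ember’s actual loader): the wire format can ship as a string inside the JavaScript bundle and only be run through JSON.parse when the template is first rendered.

  // Hypothetical lazy parsing of a wire-format template string
  const serializedTemplate =
    '{"statements":[["openElement","div"],["text","Hello, "],' +
    '["append",["unknown",["name"]]],["closeElement"]]}';

  let parsedTemplate: unknown = null;

  function getTemplate(): unknown {
    // JSON.parse runs on first use, not at script evaluation time,
    // so the main thread does less work during initial load.
    if (parsedTemplate === null) {
      parsedTemplate = JSON.parse(serializedTemplate);
    }
    return parsedTemplate;
  }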

Results

There are both wire size wins and runtime wins with Glimmer. The compiled template size has been reduced by about 40% compared to Glimmer1. Also, because we no longer compile to JavaScript but instead to a simple data structure, the parse and evaluation cost at runtime in the browser is drastically smaller. When we dropped Glimmer into our flagship app, we saw a 10-15% improvement in Real User Monitoring (RUM) metrics at the 90th percentile in the U.S. for the initial load of the web app, depending on the starting route, which is fairly substantial. We were honored to contribute to this major evolution of the Ember.js framework that also had a positive impact on our apps here at LinkedIn.

In the next blog post, we will talk about the runtime stack and some of the optimizations that have been implemented.

Acknowledgements

Thanks to Yehuda Katz for providing a technical review.

Editor’s note: For a deeper dive on Glimmer and the topics we touch on in this first installment of our blog series, come see Chad Hietala’s talk at the San Francisco Ember.js Meetup on March 21.