Glimmer: Blazing Fast Rendering for Ember.js, Part 1
March 16, 2017
At LinkedIn, we use Ember.js as our client-side web application framework, as part of the larger Pemberly architecture we employ. Historically, native applications have had a large advantage over web applications, not only in technical capabilities, but also in their ability to drive engagement, thanks to extremely rich SDKs. That said, this environment has changed drastically over the past five years, as the browser has evolved from a document viewer into the world’s most widely distributed application runtime. Below are some examples of functionality that one would have associated with a native experience five years ago but that is now available to the web platform:
- Web push notifications
- Add to home screen
- Offline first
- Manual memory management
- Interacting with Bluetooth devices
Ember’s goal is to be an SDK for the web that ties these low-level APIs together into a more productive developer experience, one that leads developers down a path of success. Having an opinionated toolchain not only allows you to scale up, but also lets you completely overhaul pieces of infrastructure in a backward-compatible way, without impacting application code.
By default, Ember applications construct their UIs on a user's computer instead of utilizing a string of HTML from the server. While this is done to allow for very interactive applications post-initial render, you have to deal with reality: users on the web are network-bound, CPU-bound, and memory-bound. Because of this, app developers must pay attention to not only the payload size, but also the parse/compile and execution time. Having a framework with an opinionated architecture across applications allows us to address performance related bottlenecks in a structured manner and optimize the system holistically.
Last year, we contributed to an evolution of Ember’s rendering engine that drastically reduces payload size, CPU time, and memory pressure, furthering our efforts to provide a delightful browsing experience in our flagship mobile web and desktop web apps. While the team has also been working on FastBoot, a means of server-side rendering Ember applications, we still needed to optimize the runtime for future work we would like to do. Our team at LinkedIn contributed both to the development and to the integration of a ground-up rewrite of the rendering engine under the Glimmer2 code name, now simply known as Glimmer.
The first iteration of Glimmer, now known as Glimmer1, was originally a thin layer that sat on top of HTMLBars. At EmberConf in 2016, the next generation of Glimmer was announced as a ground-up rewrite that would incorporate learnings and realizations from the past five years of client-side rendering to improve the performance of this area of the framework. These learnings include:
- Components should become first-class primitives of the rendering engine, allowing for runtime optimizations.
- Templates are just a declarative way of describing a UI and because of this, they have properties like referential transparency.
- Templates have time-varying values, making them essentially functional reactive programs (FRP) that you can re-run to update the values and produce a new UI.
- Instead of push-based semantics for updating the template, the system can be modeled as a discrete pull-based system with no notion of observers or notifications.
- Since we are effectively designing a programming language and its underlying runtime, we should architect it using well-established tenets of programming language implementation, such as JIT compilers and bytecode interpreters. A VM architecture would allow us to more easily implement well-known optimizations such as constant folding, inlining, and macro expansion.
With these design considerations in mind, we first had to change how templates are compiled, to drastically reduce the amount of code we send to a user’s browser. This involved changing the compilation stack that runs during the build of an Ember application.
Ahead-of-Time (AoT) stack
Since Glimmer still uses Handlebars as its templating language, we simply leverage Handlebars’ grammar and its parser, which knows how to create a Handlebars AST. Since we can’t use the AST directly, we use DOM APIs to construct the rendered layout in the browser. This means we need to validate the HTML ahead of time to ensure we do not produce an invalid DOM. To do this, we leverage the Simple HTML Tokenizer that was written for HTMLBars. The tokenizer is a W3C spec-compliant HTML tokenizer, which guarantees that we will produce a valid DOM at runtime. This also has a nice developer convenience: invalid HTML produces a compile-time error instead of potentially causing unexpected runtime errors.
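To give a feel for what spec-compliant tokenization produces, here is a toy sketch. The token names are modeled on Simple HTML Tokenizer's output, but the implementation below is a deliberately simplified stand-in; a real tokenizer also handles attributes, entities, comments, self-closing tags, and error recovery.

```javascript
// Toy illustration of the kind of token stream an HTML tokenizer
// produces. Token shapes (StartTag/Chars/EndTag) mirror the ones
// Simple HTML Tokenizer emits; the regex-based scanning here is a
// simplification for illustration only.
function toyTokenize(html) {
  const tokens = [];
  const re = /<\/([a-zA-Z][^>]*)>|<([a-zA-Z][^>]*)>|([^<]+)/g;
  let m;
  while ((m = re.exec(html)) !== null) {
    if (m[1] !== undefined) {
      tokens.push({ type: 'EndTag', tagName: m[1] });
    } else if (m[2] !== undefined) {
      tokens.push({ type: 'StartTag', tagName: m[2] });
    } else {
      tokens.push({ type: 'Chars', chars: m[3] });
    }
  }
  return tokens;
}

const tokens = toyTokenize('<div>Hello</div>');
// → [ { type: 'StartTag', tagName: 'div' },
//     { type: 'Chars', chars: 'Hello' },
//     { type: 'EndTag', tagName: 'div' } ]
```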
When we initially parse the Handlebars template, it will produce an AST that looks something like the following:
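As a running example, consider a template that interpolates a single `name` value. The AST below is a simplified sketch of what the Handlebars parser produces for it; real nodes also carry source locations, escaping flags, params, and hash arguments.

```javascript
// The template we'll follow through the pipeline:
const template = '<div>Hello, {{name}}!</div>';

// Simplified sketch of the Handlebars AST for that template. Note that
// the HTML is still opaque text inside ContentStatement nodes at this
// stage; only the mustache is understood.
const handlebarsAst = {
  type: 'Program',
  body: [
    { type: 'ContentStatement', value: '<div>Hello, ' },
    {
      type: 'MustacheStatement',
      path: { type: 'PathExpression', parts: ['name'] },
    },
    { type: 'ContentStatement', value: '!</div>' },
  ],
};
```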
Once we produce the Handlebars AST, we enter the Program Node to traverse the AST. As we come across “ContentStatement” nodes, we tokenize the value so we can produce an AST node that represents the static HTML and any static text in that HTML.
We now have a combined AST (HTML and Handlebars) that we can use in the compiler stack to produce a more compact intermediate representation (IR).
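For the running `<div>Hello, {{name}}!</div>` example, the combined AST looks roughly like the sketch below: the static HTML from the ContentStatements has been tokenized into element and text nodes, with the mustache left in place as a child. Node names here mirror Glimmer's AST (`ElementNode`, `TextNode`), but the shapes are simplified.

```javascript
// Simplified sketch of the combined (HTML + Handlebars) AST. The raw
// HTML strings are gone; in their place are structured element and
// text nodes that the compiler can reason about.
const combinedAst = {
  type: 'Program',
  body: [
    {
      type: 'ElementNode',
      tag: 'div',
      attributes: [],
      children: [
        { type: 'TextNode', chars: 'Hello, ' },
        {
          type: 'MustacheStatement',
          path: { type: 'PathExpression', parts: ['name'] },
        },
        { type: 'TextNode', chars: '!' },
      ],
    },
  ],
};
```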
Given the combined AST, we need to compile the AST into the wire format. To do this, we first traverse the AST and refine it into a flat list of actions. These actions are simply a tuple where the first item is the operation name and the second item is the AST node.
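For the running example, the flattened action list looks roughly like this. The `[operationName, astNode]` tuple shape is as described above; the operation names are modeled on Glimmer's template visitor but should be read as illustrative.

```javascript
// Minimal stand-in nodes for the combined AST from the example
// template (real actions reference the actual AST nodes).
const program = { type: 'Program' };
const div = { type: 'ElementNode', tag: 'div' };
const hello = { type: 'TextNode', chars: 'Hello, ' };
const mustache = { type: 'MustacheStatement', path: { parts: ['name'] } };
const bang = { type: 'TextNode', chars: '!' };

// Sketch of the flat action list produced by traversing the AST:
const actions = [
  ['startProgram', program],
  ['openElement', div],
  ['text', hello],       // "Hello, "
  ['mustache', mustache],// {{name}}
  ['text', bang],        // "!"
  ['closeElement', div],
  ['endProgram', program],
];
```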
From here, we want to refine this data structure into a more specific IR. In general, we want a set of instructions that we will be read into the Glimmer runtime. So in this case, we want to pass this sequence of instructions into the Template Compiler to produce a list of opcodes that look like the following:
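For the running example, the resulting opcode list might look like the hypothetical sketch below. The exact opcode names and shapes in the real Glimmer wire format have varied across versions, so treat this as an illustration of the structure rather than the literal serialization.

```javascript
// Hypothetical sketch of the opcodes the Template Compiler emits for
// '<div>Hello, {{name}}!</div>'. Note there is no MustacheStatement
// opcode: the mustache has been lowered to an "unknown" expression
// that "append" will resolve and insert at runtime.
const opcodes = [
  ['openElement', 'div'],
  ['text', 'Hello, '],
  ['append', ['unknown', ['name']], true], // trailing flag: escaped output
  ['text', '!'],
  ['closeElement'],
];
```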
You may notice that these opcodes have no notion of a "MustacheStatement"—instead, we have this "unknown" opcode. This is largely because Handlebars syntax is ambiguous during this phase of compilation in terms of what the mustache represents. While we always attempt to refine these opcodes into a more specific representation, in this specific case we cannot, so we must punt the resolution to the runtime.
In the simple case we have been working with, we have "append" and "unknown" opcodes. At runtime, we will resolve the "unknown" and then we will "append" it. In other words, “unknown” is an expression named “name” that will be evaluated and passed to “append.”
Advantages of the wire format
- We can lazily parse the program at runtime with "JSON.parse";
- We can begin to think about moving parts of the runtime off of the main thread;
- We can enable adaptive runtime optimizations.
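The first of these advantages can be sketched concretely. Because the wire format is plain JSON, the compiled template can ship as a string and be deserialized only when the template is first rendered; nothing below is Glimmer's actual API, just a minimal illustration of the lazy-parse idea.

```javascript
// Minimal sketch of lazy parsing: the compiled template ships as a
// JSON string, and JSON.parse only runs the first time the template's
// opcodes are actually needed.
function createLazyTemplate(serialized) {
  let parsed = null;
  return {
    get block() {
      if (parsed === null) {
        parsed = JSON.parse(serialized); // deferred until first use
      }
      return parsed;
    },
  };
}

const lazy = createLazyTemplate(
  '[["text","Hello, "],["append",["unknown",["name"]],true]]'
);
// Nothing has been parsed yet; reading `.block` triggers JSON.parse:
lazy.block[0]; // → ["text", "Hello, "]
```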
In the next blog post, we will talk about the runtime stack and some of the optimizations that have been implemented.
Thanks to Yehuda Katz for providing a technical review.
Editor’s note: For a deeper dive on Glimmer and the topics we touch on in this first installment of our blog series, come see Chad Hietala’s talk at the San Francisco Ember.js Meetup on March 21.