The Glimmer Binary Experience

December 7, 2017

Co-authors: Sarah Clatterbuck, Chad Hietala, and Tom Dale

A bit over a year ago, Ember.js got a major overhaul. In a tight collaboration between LinkedIn engineers and the open source community, we replaced Ember’s rendering engine with a new library, Glimmer VM, that improved performance and significantly reduced the size of compiled templates.

Glimmer treats Handlebars templates as a functional programming language and compiles them into a sequence of instructions that can be executed in the browser. These instructions, or opcodes, are encoded in a compact JSON data structure.

When we migrated our web application to use Glimmer, we saw dramatic improvements in load time. In addition to reducing over-the-wire size by 40%, compiling templates into JSON allowed us to reduce the amount of time the browser spent parsing JavaScript. Overall, this change improved load times at the 90th percentile by over one second.

In this post, we discuss a recent experiment to improve load times even further by eliminating the cost of parsing compiled templates entirely.

Unlocking experimentation with Glimmer.js

About six months ago, the Ember.js team announced the release of Glimmer.js as a standalone component library. Breaking off the view layer empowered us to experiment with bringing all the goodness of Ember and the Glimmer VM to developers creating lighter-weight experiences, like mobile apps for emerging markets, or SEO pages.

The break out of Glimmer has unlocked a lot of experimentation by our team in the subsequent months. Recently, for example, we introduced hybrid rendering, where HTML is generated with server-side rendering (SSR) and “rehydrated” in the browser. This is just the beginning of the performance benefits afforded by Glimmer’s virtual machine architecture.

The holy grail of web performance is the ability to load quickly for first-time visitors, to update quickly when the user takes action (preserving 60fps), and to provide performance by default, meaning that large teams with less experienced developers can build performant web apps without significant intervention.

Traditionally, there has been a tension between delivering minimal JavaScript to enable instant loads and the ability to have a sophisticated, responsive UI. It seems like a fundamental tradeoff that as an app grows larger, performance or productivity suffers. With Glimmer, our goal is to build apps that are lightweight, fast, and productive. One of the keys to achieving that goal is to reduce the cost of each new component added to an application.

Instant templates

While switching from JavaScript to JSON reduced the cost of parsing compiled templates, we have since combined Glimmer with cutting-edge browser features to eliminate the parse step entirely.

When optimizing load time, most developers tend to focus on reducing file size to make downloads faster. In a JavaScript-based web application, however, startup performance is also impacted by the browser’s ability to parse, compile, and evaluate your code. And significantly, on a mobile device, parsing and compiling JavaScript can be 2-5x slower than on desktop computers. This step alone can easily take up a big chunk of your performance budget.

Today, the majority of frameworks compile their view layer abstraction to JavaScript functions. The cost of parsing this JavaScript is often hidden, as it sneaks up on applications slowly as they add more and more features. As mentioned above, Glimmer today compiles templates into a sequence of opcodes that are transmitted to the browser as JSON. Because the JSON grammar is much simpler than the JavaScript grammar, a JSON parser can be up to 10× faster than a JavaScript parser when parsing the same data.

But this still means that parse times will increase as template size grows, just at a slower rate. What if we could bypass the parse step altogether?

In the last few years, browsers have become really great at handling binary data. With low-level APIs like ArrayBuffer, JavaScript programs can handle binary data as fluently as their native counterparts. We took advantage of this fact to compile templates into our own bytecode format that the Glimmer virtual machine can execute directly. Similar conceptually to something like the JVM bytecode format, Glimmer bytecode is a platform-agnostic binary format that encodes the Glimmer VM’s instruction set as a bytestream of opcodes and their operands. Instead of being bottlenecked by JSON or JavaScript parsing performance, now we are only limited by the browser’s ability to copy raw bytes from the network.
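As a generic illustration of how fluently JavaScript can now handle raw bytes (this is not Glimmer's actual internals, just the underlying browser APIs at work):

```javascript
// Generic sketch: writing and reading raw bytes with ArrayBuffer.
// In practice the bytes would arrive from the network, e.g. via
// fetch(url).then((response) => response.arrayBuffer()).
const buffer = new ArrayBuffer(4);
const view = new DataView(buffer);
view.setUint16(0, 0x003b, true); // a 16-bit word, little-endian
view.setUint16(2, 0x002a, true); // another 16-bit word

// Reading the words back requires no parsing at all,
// just direct memory access through a typed-array view.
const firstWord = view.getUint16(0, true);
const secondWord = view.getUint16(2, true);
```

No grammar, no tokenizer, no parse tree: the "decode" step is a memory read.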

Encoding Glimmer bytecode

Like many VMs, instructions in the Glimmer VM are identified by numbers. Bytecode is just an encoded sequence of these numbers. What makes Glimmer unique is that its instruction set is designed for rendering DOM in the browser.

For example, a template of <h1>Hello World</h1> would be compiled to a JSON “wire format” at build time.
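The exact wire format is an internal detail of the Glimmer VM, but conceptually it is a JSON array of opcodes and their operands. A hypothetical sketch (the opcode names here are illustrative, not Glimmer's actual instruction set):

```json
[
  ["openElement", "h1"],
  ["text", "Hello World"],
  ["closeElement"]
]
```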

In the browser, a “last mile” compilation turns this JSON wire format into an array of integers, one per opcode or operand.
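Continuing the sketch from above, the result might look like this (the opcode numbers are made up for illustration):

```javascript
// Illustrative only: opcode numbers and layout are hypothetical.
// Strings have been interned into a constants pool and are
// referenced by their offset in that pool.
const constants = ["h1", "Hello World"];

const program = [
  10, 0, // openElement, constants[0] === "h1"
  12, 1, // text, constants[1] === "Hello World"
  13,    // closeElement
];
```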

Note that the strings in our JSON have been replaced by integers as well. That’s because we use a technique called “string interning” to de-duplicate multiple copies of the same string; here, the strings are replaced with an offset into the string constants pool. In practice, this optimization can greatly reduce file size (just imagine how many times you repeat the string div in your templates).
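A minimal sketch of how string interning might work (the real Glimmer constants pool is more involved, but the idea is the same):

```javascript
// De-duplicate strings: each unique string is stored once, and every
// use site refers to it by its offset in the pool.
class StringPool {
  constructor() {
    this.strings = [];
    this.offsets = new Map();
  }

  intern(str) {
    if (!this.offsets.has(str)) {
      this.offsets.set(str, this.strings.length);
      this.strings.push(str);
    }
    return this.offsets.get(str);
  }
}

const pool = new StringPool();
pool.intern("div");  // offset 0
pool.intern("span"); // offset 1
pool.intern("div");  // offset 0 again; no duplicate is stored
```

However many hundreds of times a template repeats "div", the pool stores the string exactly once.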

Originally, our bytecode encoded every operation as four 32-bit integers, where the first 32 bits described the type of operation (the opcode) and the remaining 96 bits described up to three arguments to the instruction (the operands).

While this is efficient to execute, it caused bytecode files to be larger than necessary. Because we always reserved space for three operands even though the majority of opcodes take zero or one, the program was full of empty bytes that didn’t need to be there. Additionally, the Glimmer instruction set only includes 80 opcodes, so we could reduce the reserved space for an opcode to 8 bits.

Ultimately, we settled on a more compact encoding scheme that was still 16-bit aligned. The first 8 bits represent the opcode, the next 2 bits are used to encode the number of operands, and the final 6 bits are reserved for future use. Each operand, if present, is assigned an additional 16 bits.

With this encoding scheme, each instruction varies between two and six bytes.
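The packing and unpacking of the instruction header can be sketched with ordinary bit masking and shifting. The field layout below follows the description above; everything else is illustrative:

```javascript
// Header layout (16 bits):
//   bits 0-7:  opcode (up to 256 values; Glimmer uses ~80)
//   bits 8-9:  operand count (0-3)
//   bits 10-15: reserved for future use
function encodeHeader(opcode, operandCount) {
  return (opcode & 0xff) | ((operandCount & 0x03) << 8);
}

function decodeHeader(word) {
  return {
    opcode: word & 0xff,
    operandCount: (word >> 8) & 0x03,
  };
}

const word = encodeHeader(0x3b, 1);
const { opcode, operandCount } = decodeHeader(word);
// opcode === 0x3b, operandCount === 1
```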

This new layout reduces compiled program size by more than 50%. The “decoding” of this layout has negligible overhead, as we are simply masking and shifting bits to figure out the opcode and operand length.

Bridging the bytecode/JavaScript gap

One challenge we faced was moving the entirety of the compilation phase to build time. Previously, we were doing a “last mile” compilation of templates in the browser, once all of the application’s JavaScript had finished loading. This allowed us to connect compiled templates to JavaScript objects, like component classes, that handled things like user interaction.

The first step was ensuring that all of the compilation tiers could run in Node.js. We created a new interface, called the “bundle compiler,” that encapsulated all of the compilation tiers into a single public API that build tools can use to turn a “bundle” of templates into bytecode.

We then faced an additional problem: when compiling into bytecode, how would we “connect” that bytecode back to the right JavaScript objects at runtime? To solve this, we introduced the concept of “handles.” A handle is a unique integer identifier assigned to external objects referenced in templates, like other components and helpers. During template compilation, we assign every external object a handle that is encoded in the bytecode. For example, if we see a component invocation like <UserProfile />, we might assign this component the handle of 42 (assuming we’ve seen 41 unique component invocations prior to this one).

A component invocation like this compiles into several opcodes in the Glimmer instruction set. One of those instructions is 0x003b PushComponentDefinition, which pushes a component’s JavaScript class onto the VM stack. When compiling to bytecode, this instruction would produce four bytes: 0x00 0x3b 0x01 0x2A. The first two bytes encode the PushComponentDefinition opcode. The second two bytes encode the operand, which in this case is the handle (integer 42).

So what happens when we start running this bytecode in the browser? How do we turn the integer 42 into a living, breathing JavaScript class? The trick is something we call the “external module table.” This is a small piece of generated JavaScript code that bridges the two worlds by defining a data structure that lets handles be efficiently exchanged for their corresponding JavaScript classes.

In our example, we assigned UserProfile the handle of 42, so our external module table is an array in which the UserProfile class is stored at index 42.
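In generated code, the table might look like a plain array (the class here stands in for a real imported component; the shape of Glimmer's actual generated table may differ):

```javascript
// Hypothetical generated external module table. Handles 0-41 would map
// to the other components and helpers seen during compilation.
class UserProfile {
  /* component implementation */
}

const EXTERNAL_MODULE_TABLE = [];
EXTERNAL_MODULE_TABLE[42] = UserProfile;
```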

While executing the bytecode, a collaborating object called a “resolver” is responsible for turning handles into JavaScript objects. Because each handle is also an offset into the array, that code is both simple and fast.
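A sketch of such a resolver, assuming the external module table is just an array indexed by handle (as in the example above):

```javascript
// Because a handle is also an array offset, resolution is a single
// constant-time array lookup.
class Resolver {
  constructor(table) {
    this.table = table;
  }

  resolve(handle) {
    return this.table[handle];
  }
}

class UserProfile {}
const table = [];
table[42] = UserProfile;

const resolver = new Resolver(table);
const klass = resolver.resolve(42); // the UserProfile class
```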


[Animation: a build generating the .gbx (Glimmer Binary Experience) wire format, sending it as a response to the browser, and the VM rendering the heading in the browser.]

Next steps

We have just integrated the bytecode compiler into an internal proof-of-concept Glimmer.js application and look forward to gathering real-world results in a production application soon. This will help us gauge the impact of these changes across members on a variety of hardware, OS, browser, and bandwidth combinations.

Because Glimmer bytecode reduces file size and eliminates both parsing and “last mile” compilation costs entirely, we expect to see significant improvements to application start time, particularly on lower-end devices where CPU is the bottleneck. Perhaps even more importantly, the process of aligning both the file format and the VM internals towards a well-defined binary format unlocks a slew of exciting future experiments. In particular, our bytecode format means we’re well-positioned to investigate moving portions of the Glimmer VM into WebAssembly, reducing parsing costs and improving runtime performance even more.

We’re big fans of open source here at LinkedIn, and all of the work described above happened in the open on GitHub. If we’ve piqued your interest in Glimmer, we invite you to follow along in the Glimmer VM and Glimmer.js GitHub repositories.


Huge thanks to Chad Hietala and Tom Dale, who have been driving bytecode compilation at LinkedIn. Also, thanks to Yehuda Katz and Godfrey Chan for helping us realize this vision in the open source community.