Open Source

LinkedIn Begins Contributing Open19 Platform to the Community

Last summer, LinkedIn helped announce the launch of the Open19 Foundation and our role as a founding member. Our aim was to assist in building a community for a new generation of open data centers and edge solutions and to one day contribute our Open19 project to the foundation. Over the last year, we have made good progress on that mission, both in our support of the foundation and in the development of the Open19 platform.

Today, in conjunction with the Open19 Summit, we are happy to announce the contribution of our Open19 project to the Open19 Foundation. This contribution follows many successful open source software projects, including Kafka, Samza, and Burrow, and is the first hardware LinkedIn will share with the open source community.

In the weeks and months to come, we plan to open source every aspect of the Open19 platform—from the mechanical design to the electrical design—to enable anyone to build and create an innovative and competitive ecosystem.

Taking a look back

Two years ago, we announced the Open19 project, LinkedIn's first open hardware data center project. When we began the work, we set several critical goals that would help us determine success:

  • Create an open standard that can fit any 19” rack environment for servers, storage, and networking

  • Reduce overall data center cost

  • Enable faster rack integration, both onsite and offsite

  • Simplify data center operations

  • Build an ecosystem and a community around the technology

So, where did we end up?

The design and implementation of Open19

The final Open19 design defines a standard form factor for servers, with optimized cost and operational models, that supports fast data center integration across a variety of solutions and technologies. It is exciting to see that all our hard work has finally paid off. We are happy to share that Open19 technology is now deployed in LinkedIn's data centers!

Over the last two years, we conducted due diligence while overcoming a few challenges that come with developing a new industry standard. It was essential that we maintain a high-quality solution that addressed not only the needs of LinkedIn but also those of the community that supports the Open19 initiative and the new community members that join us regularly. With this community-first mindset and our new collaboration with the Open Compute Project, we look forward to seeing where we can take this technology!

Let’s take a look back at each of the building blocks that have made this project a great success.

The Open19 cages
The Open19 cages come in two sizes: 12RU and 8RU. They are entirely passive, and every 2RU section can be configured as half-width or full-width. The cages are straightforward, cost-effective, and foundational to establishing the standard form factors of the Open19 technology.


The Open19 server form factors
The Open19 standard defines four brick form factors:

  • Brick (½-wide, 1RU)
  • Double High Half Width Brick (½-wide, 2RU)
  • Double Wide Brick (full-width, 1RU)
  • Double High Brick (full-width, 2RU)


The bricks all have linear power and data growth, meaning the bigger the brick, the more power and data bandwidth available to it. The baseline brick starts with 200W in an unmanaged system and can reach a maximum of 400W in a managed system. Each brick also gets a day-one 50GE network interface, with a cable capable of up to 200G.
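
As a rough illustration, here is a minimal Python sketch of how these budgets scale linearly with the number of half-width 1RU slots a brick occupies; the slot counts per form factor are our assumption based on the figures above, not part of the specification.

    # Baseline budgets for a half-width 1RU brick (figures from the text above).
    BASE_UNMANAGED_W = 200   # watts per slot in an unmanaged system
    BASE_MANAGED_W = 400     # watts per slot in a managed system
    BASE_NETWORK_G = 50      # day-one network interface per slot, in Gbit/s

    # Assumed number of half-width 1RU slots each brick form factor occupies.
    SLOTS = {
        "Brick (half-width, 1RU)": 1,
        "Double Wide Brick (1RU)": 2,
        "Double High Half Width Brick (2RU)": 2,
        "Double High Brick (2RU)": 4,
    }

    for name, slots in SLOTS.items():
        print(f"{name}: {slots * BASE_UNMANAGED_W}W unmanaged, "
              f"{slots * BASE_MANAGED_W}W managed, "
              f"{slots * BASE_NETWORK_G}G network")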

All of the Open19 servers are self-contained with respect to EMI, safety, and cooling. This means that, even in an environment with no external assistance, a server will operate properly between 10°C and 40°C while remaining fully EMI-contained and safe.

The Open19 power cable
The first generation of the Open19 cable system (the power cable and the data cable) has been designed for ease of use and to remain future-proof for the next three to five years.


The power cable design assumes a maximum configuration for a typical data center of a 19.2kW power feed, split into two leaf zones of 9.6kW each. Each zone supports two levels of power per half-width 1RU server: 200W in an unmanaged system and up to 400W in a managed system. We selected power connectors and pins that can handle 400W (35A at 12V) per server, which enables a power-managed system to push each half-width server to 400W. As described above, that scales to 800W for a double-size brick and 1,600W for a 2RU system.
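
As a quick sanity check on those figures, here is a small arithmetic sketch; the assumption that one 9.6kW leaf zone feeds the same 48 half-width servers served by one ToR switch is ours, not a statement from the specification.

    # Connector/pin rating per half-width server: 35A at 12V.
    connector_w = 35 * 12            # 420W, comfortably above the 400W managed limit

    # A 19.2kW feed split into two leaf zones.
    feed_w = 19_200
    zone_w = feed_w / 2              # 9,600W per leaf zone

    # At the 200W unmanaged level, one zone covers a full rack's worth of servers.
    servers_per_zone = zone_w / 200  # 48 half-width 1RU servers

    print(connector_w, zone_w, servers_per_zone)   # 420 9600.0 48.0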

The power cable terminates in a standard high-density power connector that aggregates 12 servers; this connector uses the same connectivity technology as the server side and mirrors it. The cable and connectors are all off-the-shelf, with an open specification for any supplier who would like to produce them.

The Open19 data cable
The Open19 data cable has been optimized for speed and density.


Each server is connected with four bidirectional channels rated to 50G PSM4 on day one, which enables up to 200G per half-width server. Since our current switch has 3.2T of capacity, we only enable a 50G connection per server (growing linearly with brick size) so that a single ToR switch can support 48 servers. We initially use two channels of 25G for the data path, while the other two channels are used for optional 1GE OOB network connectivity and an optional console connection.

In the initial configuration, the cable will support 100G per half-width server, with an option to move to 200G when needed.
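
The channel arithmetic implied above can be sketched as follows; the lane counts and rates come from the text, while the exact lane-to-function mapping is our reading of it.

    lanes_per_server = 4       # bidirectional channels per half-width server
    lane_rate_initial = 25     # Gbit/s per lane in the initial configuration
    lane_rate_max = 50         # Gbit/s per lane the cable is rated for (PSM4)

    cable_initial = lanes_per_server * lane_rate_initial   # 100G per half-width server
    cable_max = lanes_per_server * lane_rate_max           # 200G per half-width server

    # Day-one data path: two 25G lanes, with the other two lanes reserved for
    # optional 1GE OOB and console connectivity.
    data_path_initial = 2 * lane_rate_initial              # 50G per server

    # 48 servers at 50G each keep the server-facing load within a 3.2T switch.
    tor_downlink_total = 48 * data_path_initial             # 2,400G

    print(cable_initial, cable_max, data_path_initial, tor_downlink_total)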

The Open19 power shelf
The Open19 power shelf combines an open standard form factor, connectors, external mechanical configuration, and management CPUs with a proprietary smart e-fusing system and off-the-shelf power modules.


The Open19 power shelf is universal and can support most AC and DC feed configurations. It was designed with a standard power shelf input connector and a specific whip cable for each input standard, while the power modules handle both AC and DC inputs. The shelf generates a 12V output (yes, 12V, not 48V) that directly feeds the server motherboards.

For a 3+3 configuration, the power shelf can handle 9.6kW of total output power. For a 5+1 configuration, the shelf rating jumps to 15.5kW but loses the A/B feed redundancy.

All six module outputs are shared, so any module configuration is possible based on your needs and the level of redundancy you expect from the system. The power shelf provides per-server protection and monitoring functions based on a dual Linux-based BMC module. The Open19 power shelf is multi-sourced, with the limitation that a supplier-specific shelf will only accept power modules from that supplier.
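
To make the redundancy trade-off concrete, here is a minimal sketch; the roughly 3.2kW-per-module figure is inferred from the 3+3 rating and is our assumption, not part of the specification.

    MODULE_W = 9_600 / 3       # ~3,200W per module, inferred from the 3+3 rating
    SHELF_LIMIT_W = 15_500     # quoted shelf rating for the 5+1 configuration

    def shelf_output(active, redundant):
        """Usable output for an active+redundant split of the six modules."""
        assert active + redundant == 6
        return min(active * MODULE_W, SHELF_LIMIT_W)

    print(shelf_output(3, 3))  # 9600.0  -> full A/B feed redundancy
    print(shelf_output(5, 1))  # 15500.0 -> more power, no A/B feed redundancy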

The Open19 optimized network switch
The Open19 platform defines a 3.2T switch that terminates the special data cable and creates blind-mate connectivity for all servers. The Open19 switch is optimized for cost-effectiveness (it is power-supply-free) and provides both the data path and out-of-band (OOB) switching functions. While the data path and OOB switches remain separate from a management perspective and independent of a power supply, they share the same chassis and aggregate both functions into one unit.

Like every other part of the Open19 platform, we have multiple sources for the Open19 switch.

The switch has the following features:

  • 3.2T Switch

  • Dual switch: data path and management (OOB)

    • 50G per server data path

    • 1G per server management (optional)

    • Console port per server (optional)

  • 12V input (no power supplies)

  • Up to 8x100G uplinks or ports for non-Open19 gear

  • Broadwell-DE CPU running ICOS and SONiC

  • BMC running OpenBMC code

  • LinkedIn white box design (open sourced)

  • Cost-optimized

This is a first-generation switch for Open19. We will follow up with a second-generation 6.4T switch to double the per-server capacity.
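
As a consistency check on the feature list above (the exact port split is an assumption based on the numbers quoted in this post), the 3.2T figure lines up with 48 server-facing 50G ports plus eight 100G uplinks:

    server_ports = 48                  # Open19 servers per ToR switch
    server_port_g = 50                 # Gbit/s data path per server
    uplinks = 8                        # up to 8x100G uplinks / non-Open19 ports
    uplink_g = 100

    downlink_total = server_ports * server_port_g    # 2,400G
    uplink_total = uplinks * uplink_g                #   800G
    print(downlink_total + uplink_total)             # 3,200G == 3.2T switch capacity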

The standard benefits of Open19

The Open19 standard represents a new way of defining open hardware, setting up a common, open, and reproducible form factor. It also enables server suppliers to build IP-protected technology that will provide a variety of servers for the community. To summarize the benefits of the technology:

  • Data center build-out becomes a game changer – build the infrastructure first for 1% of the cost, then load servers on demand in close to zero time

  • Fits into any 19” environment

  • Full rack-level hardware and software disaggregation

  • Fully redundant across every element of the standard

  • Universal power solution

  • 6x to 10x faster integration time

  • Simplified server integration

  • High power conversion efficiency – Improved PUE

  • Cost-effective – cost savings based on your environment

Where are the servers?

The Open19 open source standard does not define the servers themselves. The servers are built by the suppliers that support the Open19 server portfolio. For more information, please visit www.open19.org/marketplace.

A true community effort

Looking back at where we started, with crude mechanical diagrams and snapshots of whiteboard architecture discussions, we ended up with a fine-tuned solution that is very close to our original innovation but with more production finesse. For that, I would like to thank the whole Open19 ecosystem and community, which worked so hard to get the platform to the point where it is ready for our LinkedIn data centers.

Specifically, I'd like to thank the following teams, which worked day and night to make Open19 a success: the Flex team, the Molex team, the Amphenol team, the Delta team, the Schneider team, and, of course, the amazing LinkedIn team that did not stop until we reached a successful, high-quality data center deployment. Finally, I'd like to thank all of our partners who worked to make sure that we have all the necessary Open19-compliant servers, storage, and networking solutions.

What’s next?

Despite these incredible advancements, we are still in our early days, so more updates will come. In the meantime, if you would like to join the Open19 ecosystem and community, please contact us through the project's website, www.open19.org.