LinkedIn’s Next-Generation Data Center Goes Live
November 16, 2016
Coauthors: Shawn Zandi, Mike Yamaguchi
Earlier this year we announced Project Altair, our massively scalable, next-generation data center design. We also announced plans to build a new data center in Oregon so we could deliver our services to our members and customers more reliably. Today, we'd like to announce that our Oregon data center, featuring the design innovations of Project Altair, is fully live and ramped. The primary criteria for selecting the Oregon location were a direct-access contract for 100% renewable energy, network diversity, room for expansion, and access to talent.
LinkedIn’s Oregon data center, hosted by Infomart, is the most technologically advanced and efficient data center in our global portfolio, and its sustainable mechanical and electrical systems are now the benchmark for our future builds. We chose the ChilledDoor from MotivAir, a rear-door heat exchanger that neutralizes heat close to its source. The advanced water-side economizer cooling system reads outside-air sensors to take advantage of Oregon's naturally cool temperatures instead of spending energy to create cool air. Incorporating efficient technologies such as these lets our operations run at a PUE (Power Usage Effectiveness) of 1.06 during full economization mode.
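PUE is the ratio of total facility power to the power delivered to IT equipment, so a PUE of 1.06 means only about 6% of the facility's power goes to overhead such as cooling and power distribution. A minimal sketch of the calculation (the load figures below are illustrative, not LinkedIn's actual numbers):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. A value of 1.0 would mean every watt
    reaches the IT gear."""
    return total_facility_kw / it_equipment_kw

# Illustrative numbers: 1,000 kW of IT load plus 60 kW of cooling
# and distribution overhead yields the 1.06 figure cited above.
print(pue(1060.0, 1000.0))  # 1.06
```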
Implementing the Project Altair next-generation data center design let us move to a widely distributed, non-blocking fabric with uniform chipset, bandwidth, and buffering characteristics in a minimal, elegant design. We favor the simplest approaches in infrastructure engineering because they are easy to understand and therefore easier to scale. Moving to a unified software architecture for the whole data center allows us to run the same set of tools on both end systems (servers) and intermediate systems (the network).
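A fabric is non-blocking when each switch's uplink capacity matches its downlink capacity, i.e. a 1:1 oversubscription ratio, so any server can talk to any other at full line rate. A small sketch of that check (the port counts are hypothetical, chosen only to illustrate the arithmetic):

```python
def oversubscription(down_ports: int, down_gbps: int,
                     up_ports: int, up_gbps: int) -> float:
    """Ratio of downlink (server-facing) bandwidth to uplink
    (fabric-facing) bandwidth on a switch; 1.0 means non-blocking."""
    return (down_ports * down_gbps) / (up_ports * up_gbps)

# Hypothetical leaf switch: 16x100G ports down to servers and
# 16x100G ports up into the fabric -> 1:1, i.e. non-blocking.
print(oversubscription(16, 100, 16, 100))  # 1.0

# For contrast, 32 downlinks over the same 16 uplinks would be
# 2:1 oversubscribed, and the fabric could block under load.
print(oversubscription(32, 100, 16, 100))  # 2.0
```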
This push toward simplification, and toward owning and controlling our architecture, also motivated us to use our own 100G Ethernet open switch platform, called Project Falco. The advantages of running our own software stack are numerous: faster time to market, uniformity, simpler feature requirements and deployment, and control over our architecture and scale, to name a few.
In addition to the infrastructure innovations mentioned above, our Oregon data center was designed and deployed to use IPv6, the next-generation Internet protocol, from day one. This is part of our larger vision to move our entire stack to IPv6 globally and retire IPv4 in our existing data centers. The move to IPv6 lets us run our application stack and our private cloud, LPS (LinkedIn Platform as a Service), without the limitations of traditional stacks.
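One concrete limitation IPv6 removes is address scarcity: a single conventional /64 subnet holds 2^64 addresses, so there is no need for the careful per-rack IPv4 address rationing that traditional stacks impose. A quick illustration using Python's standard ipaddress module (the prefixes are documentation examples, not LinkedIn's allocations):

```python
import ipaddress

# An example IPv6 /64, the conventional subnet size for one
# network segment (e.g. one rack).
rack = ipaddress.ip_network("2001:db8:1:1::/64")
print(rack.num_addresses)   # 18446744073709551616 (2**64)

# Compare with a typical IPv4 /24 subnet.
legacy = ipaddress.ip_network("192.0.2.0/24")
print(legacy.num_addresses) # 256
```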
Moving to a distributed security model, with a distributed firewall in place of network load balancers, made preparing and migrating our site to the new data center a complicated task. It required significant automation and code development, as well as changes to procedures and software configurations, but ultimately reduced our infrastructure costs and gave us additional operational flexibility.
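The core idea of a distributed firewall is that filtering moves from a central appliance to every host: each server evaluates its own allowlist of permitted flows. A minimal conceptual sketch, assuming a rule format of (source prefix, destination port); the rules and prefixes below are hypothetical, not LinkedIn's policy:

```python
import ipaddress

# Hypothetical per-host allowlist: (permitted source prefix, dest port).
ALLOW_RULES = [
    (ipaddress.ip_network("2001:db8:10::/48"), 443),   # app traffic
    (ipaddress.ip_network("2001:db8:20::/48"), 9090),  # metrics scrapes
]

def permitted(src_ip: str, dst_port: int) -> bool:
    """Return True if any rule on this host matches the incoming flow."""
    src = ipaddress.ip_address(src_ip)
    return any(src in net and dst_port == port for net, port in ALLOW_RULES)

print(permitted("2001:db8:10::5", 443))  # True: matches the first rule
print(permitted("2001:db8:30::5", 443))  # False: source prefix not allowed
```

Because each host enforces its own policy, there is no central choke point to scale or to fail, which is what makes removing the dedicated network appliances possible.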
Our site reliability engineers and systems engineering teams introduced a number of innovations in deployment and provisioning, which allowed us to streamline the software buildout process. This, combined with zero touch deployment, resulted in a shorter timeline and smoother process for bringing a data center live than we've ever achieved before.
We’re delighted to participate in the growth of Oregon as a next-generation sustainable technology center. The Uptime Institute has recognized LinkedIn with an Efficient IT award for our efforts! This award recognizes the management processes and organizational behaviors that achieve lasting reductions in cost, utilities, staff time, and carbon emissions for data centers.
The deployment of our new data center was a collective effort of many talented individuals across LinkedIn in several organizations. Special thanks to the LinkedIn Data Center team, Infrastructure Engineering, Production Engineering and Operations (PEO), Global Ops, AAAA team, House Security, SRE organization, and LinkedIn Engineering.