Serving Ads Beyond LinkedIn via Real-time Bidding

Coauthors: Peter Foldes, Yawen Wei

Sponsored Content is a key component of LinkedIn’s marketing solutions suite, helping advertisers reach the right audience with comprehensive targeting options through company updates. Sponsored Content is a native ad shown in members’ feeds that shares a similar look, background, and font with the other items in the feed. It has been one of the fastest-growing products at LinkedIn since its launch in 2013.

In 2015, we decided to leverage the programmatic buying ecosystem to scale our inventory and increase user reach for advertisers through real-time bidding (RTB). What originally started as a small proof-of-concept experiment became a neat beta offering at LinkedIn.


Example of a LinkedIn advertisement in a publisher’s feed, sourced through our own programmatic buying solution

What is real-time bidding (RTB)?

In real-time bidding, advertising buyers programmatically purchase inventory from publishers on a per-impression basis through ad exchanges. LinkedIn participates by running marketing campaigns on advertisers’ behalf and finding their target audiences and advertising opportunities on publishers’ applications. Each time an ad is shown, an impression is counted; advertisers can pay either per impression or per click.

Three major roles within this ecosystem include the publisher, the demand-side platform, and the ad exchange.

Publisher: The people or companies that publish content on digital media, such as apps or websites, and sell advertisement space on it. They make revenue by selling ad slots.

Demand-side Platform (DSP): The platform that runs marketing campaigns and programmatically buys inventory on behalf of advertisers on a per-impression basis. DSPs use various targeting features and learned bidding models to find the right user segments for advertisers and serve the most relevant advertisements.

Ad Exchange: A platform that connects publishers and DSPs for the sale and purchase of media advertising inventory by running an auction among DSPs for each advertisement slot.

Below is a diagram of the relationship between the publisher’s mobile app, an ad exchange, and multiple DSPs.

What are the challenges?

Latency

The exchange sends a request to the DSP and expects the result within 120 ms, including the network latency. How fast is 120 ms? Blink your eyes. The average time it takes to blink is around 100-150 ms. In the U.S., the latency between the East Coast and West Coast is about 80 ms. The latency between LinkedIn's Virginia data center (LVA) and our Texas data center (LTX) is around 40 ms. This leaves us very little time to finish all the work on our end needed to serve the ad, including conducting internal auctions to select the best candidate ads from tens of thousands of available campaigns.

Scalability

The DSP needs to be able to receive all the traffic from all of the publisher partners integrated with the exchange. For example, one of our partners handles hundreds of millions of requests every day. This means that, in addition to guaranteeing low latency, we also have to build the system to handle tens of thousands of queries per second (QPS) without crashing internal downstream services.

Complex bidding model

Keeping a “members first” philosophy in mind, LinkedIn strives to provide the most relevant content to our members, while purchasing the ad exchanges’ inventory with the highest return on investment (ROI) on behalf of our valued advertisers. From both perspectives, responses such as clicks are the key metrics to optimize. Accurate response prediction based on the member, the ad content, and the ad slot is key to selecting the most engaging and relevant ad for a member and the most profitable ad slots for advertisers. Aside from challenges related to data sparsity and latency, the response prediction model also needs to account for the fact that an ad slot’s ROI may shift frequently, so we must adjust our bids quickly to keep optimizing the advertiser’s ROI.
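To make the bidding arithmetic more concrete, here is a minimal sketch, assuming a simple logistic-regression click model and a cost-per-click campaign (the feature names, weights, and pacing multiplier are illustrative, not our production model): the bid is the expected value of an impression, i.e., the predicted click probability times the advertiser's CPC bid, scaled by a pacing factor that the feedback loop can adjust as an ad slot's ROI shifts.

```java
import java.util.Map;

/** Minimal sketch of expected-value bidding for a CPC campaign (illustrative only). */
public class BidCalculator {

    /** Logistic-regression click probability from sparse features and learned weights. */
    static double pClick(Map<String, Double> features, Map<String, Double> weights, double bias) {
        double z = bias;
        for (Map.Entry<String, Double> f : features.entrySet()) {
            z += weights.getOrDefault(f.getKey(), 0.0) * f.getValue();
        }
        return 1.0 / (1.0 + Math.exp(-z));
    }

    /**
     * Expected value of one impression: pClick * advertiser CPC bid, scaled by a
     * pacing multiplier that is adjusted as the observed ROI of the ad slot drifts.
     * Exchanges typically price per thousand impressions, hence the CPM conversion.
     */
    static double bidCpm(double pClick, double advertiserCpc, double pacing) {
        return pClick * advertiserCpc * pacing * 1000.0;
    }

    public static void main(String[] args) {
        double p = pClick(
            Map.of("industry:software", 1.0, "adSlot:feed", 1.0),   // request/member features
            Map.of("industry:software", 0.8, "adSlot:feed", 0.3),   // learned weights
            -5.0);                                                  // model bias term
        System.out.printf("pClick=%.4f, bid=%.2f CPM%n", p, bidCpm(p, 2.50, 0.9));
    }
}
```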

Quality control

The exchange is integrated with thousands of publishers, and not all of those apps are of the same quality. Some publishers are more relevant to our advertisers, some are visited a lot more by LinkedIn members, and others could be affected by fraudulent activities. This leaves us with the challenge of ensuring that the inventory we buy on behalf of advertisers is of good content quality and truly reaches the user segment they are looking for.

High-level system overview

Serving ads via RTB consists of three services: TSCP-serving, TSCP-tracking, and LAX-service, the last of which is dedicated to RTB.

[Diagram: high-level overview of the RTB serving system]

Let’s briefly explain how this system works, from beginning to end.

When a user scrolls through the feed of a third-party news application (the publisher), that mobile application sends a request to the publisher’s backend server to reserve ad slots. The server then sends a request to the ad exchange, providing contextual data such as the application’s mobile ID, IP address, geographic location, the app’s name, and so on. The exchange broadcasts the request to all of its DSP partners and conducts an auction, usually a second-price auction, over all responses returned within a certain time constraint (e.g., 120 milliseconds). The winning DSP’s ad is shown in the feed, and the winning DSP pays the ad exchange for showing that ad impression.
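To illustrate the auction step, below is a minimal sketch of a second-price auction over the DSPs' bids: the highest bidder wins but pays the second-highest price (or the floor if it is the only qualifying bid). A real exchange also handles per-deal floor prices, timeouts, and creative checks; the types and numbers here are assumptions.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

/** Illustrative second-price auction between DSP bids for one ad slot. */
public class SecondPriceAuction {

    record Bid(String dspId, double priceCpm) {}
    record Result(String winningDsp, double clearingPriceCpm) {}

    static Optional<Result> run(List<Bid> bids, double floorCpm) {
        List<Bid> qualified = bids.stream()
            .filter(b -> b.priceCpm() >= floorCpm)
            .sorted(Comparator.comparingDouble(Bid::priceCpm).reversed())
            .toList();
        if (qualified.isEmpty()) {
            return Optional.empty();           // no bid met the floor; the slot goes unfilled
        }
        // Winner pays the runner-up's price, or the floor if there was no runner-up.
        double clearing = qualified.size() > 1 ? qualified.get(1).priceCpm() : floorCpm;
        return Optional.of(new Result(qualified.get(0).dspId(), clearing));
    }

    public static void main(String[] args) {
        var bids = List.of(new Bid("dsp-a", 4.20), new Bid("dsp-b", 3.10), new Bid("dsp-c", 1.00));
        run(bids, 1.50).ifPresent(r ->
            System.out.println(r.winningDsp() + " wins and pays " + r.clearingPriceCpm() + " CPM"));
    }
}
```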

Challenges and solutions

Even a cursory look at the architecture of this system reveals many potential problems, particularly around scalability and latency. Below are some of the challenges and the solutions we have implemented to address them.

Latency: Geo-distribution of data centers, coupled with last-mile delays, gives us a very small window (e.g., <100 ms) in which to serve an ad.

  • Filter traffic upstream to keep traffic volumes low to downstream services.
  • Keep up-to-date metadata in caches—for example, currency conversion rates, fraud information, decorations, response prediction, etc.—to reduce storage I/O-induced latency.
  • Build fast reverse index lookups and parallel search in the decisioning service (explained below) to minimize request processing time; see the index sketch after this list.
  • Minimize communication overhead between servers through physical network peering.
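Here is a minimal sketch of the reverse index idea: targeting attributes map to the campaigns that accept them, so finding candidates becomes a few set intersections rather than a scan over every campaign. The attribute encoding and the "must match every attribute" rule are simplifications of real targeting semantics.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Illustrative reverse index: targeting attribute -> campaigns that target it. */
public class CampaignIndex {

    private final Map<String, Set<Long>> index = new HashMap<>();

    /** Index a campaign under each attribute value it targets, e.g. "geo:us" or "app:news". */
    void add(long campaignId, List<String> targetedAttributes) {
        for (String attr : targetedAttributes) {
            index.computeIfAbsent(attr, k -> new HashSet<>()).add(campaignId);
        }
    }

    /** Candidate campaigns are those indexed under every attribute of the incoming request. */
    Set<Long> candidates(List<String> requestAttributes) {
        Set<Long> result = null;
        for (String attr : requestAttributes) {
            Set<Long> matches = index.getOrDefault(attr, Set.of());
            if (result == null) {
                result = new HashSet<>(matches);
            } else {
                result.retainAll(matches);   // set intersection: keep campaigns matching all attributes so far
            }
            if (result.isEmpty()) {
                break;                       // early exit: no campaign can match anymore
            }
        }
        return result == null ? Set.of() : result;
    }

    public static void main(String[] args) {
        CampaignIndex idx = new CampaignIndex();
        idx.add(1L, List.of("geo:us", "app:news"));
        idx.add(2L, List.of("geo:us", "app:games"));
        System.out.println(idx.candidates(List.of("geo:us", "app:news")));   // prints [1]
    }
}
```

In practice, lookups for independent attribute groups can also be searched in parallel to further reduce request processing time.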

Scalability: Supporting tens of thousands of queries per second while efficiently using network and compute resources.

  • Use strong monitoring and alerting systems and run regular stress tests.
  • Ensure there are no locks in the system (optimistic strategies), and that metadata is pushed to the caches instead of pulled from busy servers (see the sketch after this list).
  • Horizontally scale the number of service instances by storing common state in distributed data stores.
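As a small sketch of the lock-free, push-based caching mentioned above (the currency-rate example and the method names are ours for illustration), the push pipeline atomically swaps in a fresh immutable snapshot while request threads read the current one without taking any locks.

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicReference;

/**
 * Illustrative lock-free metadata cache: an upstream pusher swaps in a fresh
 * immutable snapshot; request threads read the current snapshot without locking.
 */
public class PushedMetadataCache {

    private final AtomicReference<Map<String, Double>> fxRates =
        new AtomicReference<>(Map.of());

    /** Called by the push pipeline whenever new currency conversion rates arrive. */
    void refresh(Map<String, Double> newRates) {
        fxRates.set(Map.copyOf(newRates));   // atomic reference swap; readers never block
    }

    /** Called on the hot bidding path; falls back to 1.0 if the rate is unknown. */
    double usdRate(String currency) {
        return fxRates.get().getOrDefault(currency, 1.0);
    }

    public static void main(String[] args) {
        PushedMetadataCache cache = new PushedMetadataCache();
        cache.refresh(Map.of("EUR", 1.08, "GBP", 1.27));
        System.out.println(cache.usdRate("EUR"));   // 1.08
    }
}
```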

Complex bidding model: Matching ads to the right audience.

  • Keeping a “members first” mindset: at LinkedIn, we strive to provide the most relevant content to our members, while purchasing the ad exchanges’ inventory with the highest return on investment on behalf of our valued advertisers.
  • To maximize this win-win, we designed and implemented fast feedback loops into all of our models, with the ability to run multiple models at any given time.
  • We use offline Hadoop-based systems and near-real-time systems like Kafka and Samza for data mining and modeling (see the sketch after this list).
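As a rough sketch of what such a feedback loop can look like (the topic name, message format, and EWMA update rule below are assumptions, not our production pipeline), a Kafka consumer can tail impression and click events and maintain a running click-through-rate estimate per ad slot for the bidding model to read.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

/**
 * Illustrative near-real-time feedback loop: tail ad events from Kafka and keep a
 * running click-through rate per ad slot that the bidding model can read.
 */
public class SlotFeedbackLoop {

    private final ConcurrentHashMap<String, Double> slotCtr = new ConcurrentHashMap<>();
    private static final double ALPHA = 0.01;   // EWMA step size

    public void run() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "slot-feedback");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("ad-events"));   // hypothetical topic of impression/click events
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    String[] parts = record.value().split(",");   // "slotId,IMPRESSION" or "slotId,CLICK"
                    double outcome = "CLICK".equals(parts[1]) ? 1.0 : 0.0;
                    // Exponentially weighted moving average, so the estimate tracks recent shifts in slot ROI.
                    slotCtr.merge(parts[0], outcome, (old, x) -> old + ALPHA * (x - old));
                }
            }
        }
    }

    /** Read path for the bidder: current CTR estimate for a slot (0.0 if unseen). */
    public double ctr(String slotId) {
        return slotCtr.getOrDefault(slotId, 0.0);
    }
}
```

In a real pipeline, this kind of aggregation would typically live in a stream processor such as Samza, with the resulting statistics pushed to the bidders' caches rather than kept in process memory.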

Quality control: Deliver high-quality ads to high-quality publishers.

  • Build internal filters, blacklists, and traffic throttling to control access to the ad exchange (see the sketch after this list).
  • Ensure that quality scanning and associated protections are enabled at all times.
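A minimal sketch of the kind of pre-bid filtering this implies (the blacklist, quality scores, and thresholds are assumptions): drop requests from blacklisted or low-scoring apps, and throttle the remaining traffic before it reaches the bidder.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ThreadLocalRandom;

/** Illustrative pre-bid publisher filter: blacklist, quality-score threshold, and traffic throttle. */
public class PublisherFilter {

    private final Set<String> blacklistedApps;              // known bad or fraudulent publishers
    private final Map<String, Double> appQualityScores;     // refreshed from an offline quality/fraud pipeline
    private final double minQuality;                        // minimum acceptable content-quality score
    private final double sampleRate;                        // fraction of remaining traffic to bid on

    PublisherFilter(Set<String> blacklistedApps, Map<String, Double> appQualityScores,
                    double minQuality, double sampleRate) {
        this.blacklistedApps = blacklistedApps;
        this.appQualityScores = appQualityScores;
        this.minQuality = minQuality;
        this.sampleRate = sampleRate;
    }

    /** Returns true if a bid request from this app should be passed on to the bidder. */
    boolean shouldBid(String appId) {
        if (blacklistedApps.contains(appId)) {
            return false;
        }
        if (appQualityScores.getOrDefault(appId, 0.0) < minQuality) {
            return false;
        }
        // Random throttle protects downstream services from traffic spikes.
        return ThreadLocalRandom.current().nextDouble() < sampleRate;
    }

    public static void main(String[] args) {
        PublisherFilter filter = new PublisherFilter(
            Set.of("bad-app"), Map.of("news-app", 0.9, "spam-app", 0.2), 0.5, 0.8);
        System.out.println(filter.shouldBid("news-app"));   // true on roughly 80% of requests
        System.out.println(filter.shouldBid("bad-app"));    // always false
    }
}
```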

High-level online system overview

Now let’s look at a high-level overview of the online system. Serving ads via RTB consists of three major services: the Decisioning service (aka Bidder), the Ad Tracking service, and the Exchange service.

[Diagram: the Exchange service, Decisioning service, and Ad Tracking service]

The Exchange service is dedicated to RTB and is optimized to handle high QPS with low latency. It sends selected inventory opportunities to the Decisioning service, which then selects the best candidate ads from among millions of available campaigns. To measure performance, ad impression and click events are recorded in the Exchange service as well as in the Ad Tracking service.

To satisfy the critical latency requirement, caching and asynchronous parallel calls are applied widely throughout the system. In addition, network peering and keep-alive connections also help greatly in reducing latency.
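As one illustration of the async pattern (the lookup methods and the 40 ms budget below are hypothetical), the bidder can fan out independent lookups with CompletableFuture and bound the whole step by the remaining slice of the latency budget, preferring a no-bid response over missing the exchange's deadline.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

/** Illustrative fan-out: run independent lookups in parallel and respect the remaining latency budget. */
public class ParallelLookups {

    private static final ExecutorService POOL = Executors.newFixedThreadPool(16);

    // Hypothetical lookups; in practice these hit in-memory caches or nearby services over keep-alive connections.
    static double lookupQualityScore(String appId) { return 0.9; }
    static double lookupFxRate(String currency)    { return 1.08; }

    public static void main(String[] args) throws Exception {
        CompletableFuture<Double> quality = CompletableFuture.supplyAsync(() -> lookupQualityScore("news-app"), POOL);
        CompletableFuture<Double> fxRate  = CompletableFuture.supplyAsync(() -> lookupFxRate("EUR"), POOL);

        try {
            // Wait for both, but never longer than the slice of the 120 ms budget allotted to lookups.
            CompletableFuture.allOf(quality, fxRate).get(40, TimeUnit.MILLISECONDS);
            System.out.println("quality=" + quality.join() + ", fxRate=" + fxRate.join());
        } catch (TimeoutException e) {
            System.out.println("Budget exceeded; respond with no bid instead of missing the exchange deadline.");
        } finally {
            POOL.shutdown();
        }
    }
}
```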

Summary

In this blog, we described the real-time bidding system we built to serve ads beyond LinkedIn-owned and operated websites and applications. This gives advertisers additional channels and options to have their ads reach a wider range of relevant audiences. We integrated with mobile native exchanges via real-time bidding. The system is horizontally scalable and can meet high-throughput and low-latency requirements. We will cover more technical details and the offline systems in future articles.

We are constantly improving this solution at LinkedIn by adding in advanced bidding models, enabling additional facets for targeting relevant audiences, and monitoring the quality of publishers and inventories. We are also working on scaling this solution by integrating with more exchanges and directly partnering with quality publishers to access premium inventories.

Kudos to each and every one who made this happen! Jie Xiao, Kaiyang Liu, Yawen Wei, Dayun Li, Mingyuan Zhong, Karim Filali, Peter Foldes, Siyu You, Anand Mundada, Divye Khilnani, Daniel Blanaru, Lance Dibble, Nuo Wang, and many more!

If you are interested in connecting with one of the leaders in this group, please email: eng.ms.ad.growth@linkedin.com.