At LinkedIn, our vision is to create economic opportunity for the global workforce. To deliver on this vision, we conduct a variety of applied research on the rich data that flows through our systems. Effectively leveraging that data, however, requires both technical mastery of distributed big data systems and deep business domain expertise, two disparate skill sets that rarely overlap. To address this challenge, the Big Data Engineering (BDE) vertical teams span a wide range of focus areas inside the company, from marketing and sales to talent and consumer analytics, proposing and developing data products for our data scientists, analysts, and business stakeholders.

On one hand, we advance and deploy the latest technologies to serve LinkedIn and the global workforce. On the other, we work closely with business stakeholders to ensure our data products deliver high value and impact.

[Flow chart of the Big Data Engineering verticals]

Our technology stack consists of a variety of distributed platforms, both open source and proprietary: Hadoop, HDFS, Hive, and Spark for large-scale data processing; Gobblin and Kafka for ingestion; Azkaban for workflow management; and Dr. Elephant for performance monitoring and tuning, among other applications.
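As a simplified illustration of how these pieces fit together (a hedged sketch, not an actual LinkedIn pipeline; the table and column names below are hypothetical), a typical batch job reads a Hive table backed by HDFS, aggregates it with Spark, and writes the result back for a downstream Azkaban flow to consume:

```python
# Minimal sketch of a batch aggregation job on this stack.
# Table and column names are hypothetical examples.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily_post_metrics")   # job name surfaced in cluster UIs and Dr. Elephant
    .enableHiveSupport()             # read/write Hive tables stored on HDFS
    .getOrCreate()
)

# Read one day of raw events from a Hive table (ingested upstream via Kafka/Gobblin).
posts = spark.table("events.member_posts").where(F.col("date_sk") == "2017-06-01")

# Aggregate into a small metrics table for a downstream dashboard.
daily_metrics = (
    posts.groupBy("country", "industry")
         .agg(
             F.countDistinct("member_id").alias("unique_posters"),
             F.count("post_id").alias("total_posts"),
         )
)

# Persist the result; an Azkaban flow would schedule this job to run daily.
daily_metrics.write.mode("overwrite").saveAsTable("metrics.daily_post_activity")
```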

We’re proud of the work we do and are always on the lookout for passionate engineers to join our team. Some of our recent work includes building data pipelines that run machine learning and natural language processing analysis across millions of LinkedIn member posts each day, enabling a daily dashboard of key business metrics for our executives, and developing hundreds of datasets that power two of our most important company-wide business metrics.

 

[Photo of the team playing a game at an offsite]
[Photo of the team]