Gobblin' Big Data With Ease
November 25, 2014
In addition to our internal data, we also ingest data from many different external data sources. Some of these data sources are platforms themselves, like Salesforce, Google, Facebook and Twitter. Other data sources include external services that we use for marketing purposes. Our external datasets tend to be an order of magnitude smaller in volume than our internal datasets; however, we have far less control over their schema evolution, data format, quality and availability, which poses its own set of challenges.
Bringing all these external and internal datasets together into one central data repository for analytics (HDFS) allows for the generation of some really interesting and powerful insights that drive marketing, sales and member-facing data products.
Over the years, LinkedIn’s data management team built efficient solutions for bringing all this data into our Hadoop and Teradata based warehouse. The figure above shows the complexity of the data pipelines. Some of the solutions, like our Kafka-etl (Camus), Oracle-etl (Lumos) and Databus-etl pipelines, were more generic and could carry different kinds of datasets, while others, like our Salesforce pipeline, were very specific. At one point, we were running more than 15 types of data ingestion pipelines, and we were struggling to keep them all functioning at the same level of data quality, features and operability. Looking across these pipelines, we realized that their heterogeneity stemmed from several dimensions:
- The types of data sources: RDBMS, distributed NoSQL, event streams, log files, etc.;
- The types of data transport protocols: file copies over HTTP or SFTP, JDBC, REST, Kafka, Databus, vendor-specific APIs, etc.;
- The semantics of the data bundles: increments, appends, full dumps, change streams, etc.;
- The types of data flows: batch, streaming.
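This variety is the core design problem: each combination of source type, transport protocol and bundle semantics would otherwise need its own bespoke pipeline. As a minimal sketch (in Python, with hypothetical names; this is not Gobblin's actual API), heterogeneous sources can sit behind one common extractor interface so that downstream code sees a single record stream:

```python
from abc import ABC, abstractmethod
from typing import Iterator


class Extractor(ABC):
    """One interface over heterogeneous sources (RDBMS, streams, log files)."""

    @abstractmethod
    def records(self) -> Iterator[dict]:
        ...


class JdbcExtractor(Extractor):
    """Stand-in for a JDBC pull; here it just yields canned rows."""

    def __init__(self, rows):
        self.rows = rows

    def records(self):
        yield from self.rows


class LogFileExtractor(Extractor):
    """Stand-in for a log-file source; parses tab-separated lines."""

    def __init__(self, lines):
        self.lines = lines

    def records(self):
        for line in self.lines:
            ts, event = line.rstrip("\n").split("\t", 1)
            yield {"ts": ts, "event": event}


def ingest(extractors):
    """Downstream consumers see one uniform record stream,
    regardless of where each record came from."""
    for ex in extractors:
        yield from ex.records()
```

With this shape, supporting a new data source means writing one extractor rather than standing up a whole new pipeline.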
We also realized there were some common patterns and requirements:
- Centralized data lake: standardized data formats, directory layouts;
- Standardized catalog of lightweight transformations: security filters, schema evolution, type conversion, etc.;
- Data quality measurements and enforcement: schema validation, data audits, etc.;
- Scalable ingest: auto-scaling, fault tolerance, etc.;
- Ease of operations: centralized monitoring, enforcement of SLAs, etc.;
- Ease of use: self-serve onboarding of new datasets to minimize the time and involvement of engineers.
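These requirements point toward pipelines assembled from small, standardized stages rather than per-source custom code. A hedged sketch (hypothetical function names, not the Gobblin API) of a catalog of lightweight transformations feeding a data-quality gate:

```python
def drop_sensitive_fields(record):
    """Security filter: strip fields that must not land in the data lake."""
    return {k: v for k, v in record.items() if k not in {"ssn", "password"}}


def coerce_types(record):
    """Type conversion: normalize numeric strings to ints where possible."""
    return {k: int(v) if isinstance(v, str) and v.isdigit() else v
            for k, v in record.items()}


def has_required_fields(record, required=("id",)):
    """Data-quality check: reject records missing mandatory fields."""
    return all(f in record for f in required)


def run_pipeline(records, transforms, quality_check):
    """Apply the transformation catalog, then enforce quality:
    passing records go to the lake, failing ones to quarantine."""
    good, quarantined = [], []
    for rec in records:
        for transform in transforms:
            rec = transform(rec)
        (good if quality_check(rec) else quarantined).append(rec)
    return good, quarantined
```

Because the transformations and checks are shared across all sources, every dataset gets the same quality enforcement and operational visibility for free.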
We’ve brought these demands together to form the basis for our uber-ingestion framework Gobblin. As the figure below shows, Gobblin is targeted at “gobbling in” all of LinkedIn’s internal and external datasets through a single framework.
The Road Ahead
We recently presented Gobblin at QConSF 2014 (abstract, slides). When we talk to engineers at different companies, we hear a lot of similar pain points around data ingestion into Hadoop, S3, etc., so we're planning to open source Gobblin in the coming weeks to share an early version with the rest of the community. Gobblin is already processing tens of terabytes of data every day in production. We're currently migrating all of our external datasets and small-scale internal datasets into Gobblin, which is helping us harden the internal APIs, the platform and operations. Early next year, we plan to migrate some of our largest internal pipelines into Gobblin as well. Stay tuned for more updates on the engineering blog!
The Gobblin Team
The Gobblin project is under active development by the Data Management team: Chavdar Botev, Henry Cai, Kenneth Goodhope, Lin Qiao, Min Tu, Narasimha Reddy, Sahil Takiar and Yinan Li, with technical guidance from Shirshanka Das. The Data Management team is part of the Data Analytics Infrastructure organization led by Kapil Surlaker and Greg Arnold.
Some of this data is directly ingested from external sources; the remainder comprises data brought in by other pipelines and processed regularly by Gobblin for privacy compliance purposes.