Serverless Big Data pipeline on AWS

Andreas Wittig – 14 Jul 2016

Lambda is a powerful tool for integrating different services on AWS. Over the last months, I’ve successfully used serverless architectures to build Big Data pipelines, and I’d like to share what I’ve learned with you.

The benefits of a serverless pipeline are:

  • No need to manage a fleet of EC2 instances.
  • Highly scalable.
  • Pay per execution.

A Big Data pipeline moves data between data sources and data targets, which is often called an ETL process (extract, transform, load). The following figure describes a typical serverless Big Data pipeline:

Big Data Pipeline with AWS Lambda

Use Cases

Using Lambda to implement your Big Data pipeline is especially useful if you need to transform or filter data while moving it from data source to data target.

Typical use cases:

  • Load CloudFront and ELB logs from S3, transform and filter data, insert into Elasticsearch cluster.
  • Load business reports from S3, transform and filter data, insert into Redshift.
  • Load event data from Kinesis stream, transform and filter data, store on S3 for further processing.
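For the Kinesis use case, the event records arrive base64-encoded. A minimal sketch of decoding them before transforming and filtering — the helper name and the assumption that producers send JSON payloads are mine, not from the original:

```javascript
// Hypothetical helper: Kinesis delivers record payloads base64-encoded,
// so they have to be decoded before transforming or filtering.
function decodeRecords(event) {
  return event.Records.map(function(record) {
    var payload = Buffer.from(record.kinesis.data, "base64").toString("utf8");
    return JSON.parse(payload); // assumes the producer sends JSON events
  });
}
```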

Other use cases are possible as well. Changed data (S3 and DynamoDB), external events, or a schedule (CloudWatch Event Rule) can trigger a Lambda function. A Lambda function can access data sources and targets connected to the Internet or a VPC.

So it seems like there are almost no limits?


Limitations

Lambda is a powerful tool, but compared to an EC2 instance it comes with limitations as well. The limits relevant when building a serverless Big Data pipeline:

  • Maximum execution duration: 300 seconds
  • Maximum memory: 1536 MB
  • Ephemeral disk capacity: 512 MB

A real-world example:

  1. Load CSV file from S3.
  2. Unzip data.
  3. Transform data.
  4. Zip data.
  5. Upload to S3.

The unzipped data amounts to about 800 MB. Implementing the Lambda function as a sequence of buffered steps is not possible, as there is neither enough memory nor disk capacity to hold the unzipped data as well as the transformed data at once.

Solution: Data Streaming

Using streaming instead of linear execution allows you to extract, transform, and load data in chunks from the beginning to the end of the pipeline.

The following source code contains an example implementing a stream for the described scenario in Node.js:

  1. Load csv.tgz file from S3.
  2. Unzip data.
  3. Split at the end of the line.
  4. Transform data.
  5. Zip data.
  6. Upload file to S3.
var AWS = require("aws-sdk");
var zlib = require("zlib");
var split = require("split");
var transform = require("stream-transform");

var sourceBucket = "BUCKET_NAME";
var sourceKey = "KEY";
var targetBucket = "BUCKET_NAME";
var targetKey = "KEY";

var s3 = new AWS.S3();

var transformer = transform(function(record, callback) {
  // TODO transform the record
  // split() strips the line breaks, so re-append one per record
  callback(null, record + "\n");
});

var pipeline = s3.getObject({ // (1)
  Bucket: sourceBucket,
  Key: sourceKey
}).createReadStream()
  .pipe(zlib.createGunzip()) // (2)
  .pipe(split()) // (3)
  .pipe(transformer) // (4)
  .pipe(zlib.createGzip()); // (5)

s3.upload({ // (6)
  Bucket: targetBucket,
  Key: targetKey,
  Body: pipeline
}, function(err) {
  if (err) {
    console.error("upload failed", err);
  } else {
    console.log("upload finished");
  }
});
This approach allows you to process data without hitting the memory or disk space limits. Of course, the maximum execution duration of 300 seconds still limits the maximum throughput of your serverless data pipeline. If you hit that limit, you need to split your data into smaller chunks.
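One option for splitting the data — for uncompressed sources only, since a gzip stream cannot be decompressed from an arbitrary byte offset — is to fan out one Lambda invocation per S3 byte range. A sketch, where the chunk size and the helper name are assumptions of mine:

```javascript
// Sketch: compute HTTP Range headers to read an S3 object in chunks,
// e.g. one chunk per Lambda invocation. Works for uncompressed objects
// only; a gzipped object cannot be read from a random offset.
function rangeHeaders(objectSize, chunkSize) {
  var ranges = [];
  for (var start = 0; start < objectSize; start += chunkSize) {
    var end = Math.min(start + chunkSize - 1, objectSize - 1);
    ranges.push("bytes=" + start + "-" + end);
  }
  return ranges;
}

// Each range would then be passed as the Range parameter of s3.getObject.
```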

Andreas Wittig

I’m the author of Amazon Web Services in Action. I work as a software engineer and independent consultant focused on AWS and DevOps.

You can contact me via Email, Twitter, and LinkedIn.
