AWS Velocity Series: Running your application

Michael Wittig – 21 Feb 2017

There are many options when it comes to running an application on AWS: EC2 based, containerized, or serverless. Choosing the best option for your specific use case is important.

All options that I present are what I call production-ready:

  • Highly available: no single point of failure
  • Scalable: increase or decrease the number of instances based on load
  • Frictionless deployment: deliver new versions of your application automatically without downtime
  • Secure: patch operating systems and libraries frequently, and follow the least privilege principle in all areas
  • Operations: provide tools like logging, monitoring and alerting to recognize and debug problems

AWS Velocity Series

Most of our clients use AWS to reduce time-to-market following an agile approach. But AWS is only one part of the solution. In this article series, I show you how we help our clients to improve velocity: the time from idea to production. Discover all posts!

Let’s start by introducing the available options. After that, I present a comparison table.

EC2 based app

An EC2 based app runs directly on a virtual machine: the EC2 instance. You can choose a flavor of Windows or Linux as your operating system. You get root access to the virtual machine so there are no limitations in what you can install and configure. But keep in mind that you are also responsible for the operating system and all installed software. Patching is your job.

By default, a single EC2 instance cannot guarantee high availability. That’s why you need more than one EC2 instance. An Auto Scaling Group can manage such a fleet of EC2 instances for you. And as the name implies, the Auto Scaling Group is also a building block when it comes to scalability. To provide a stable endpoint for your users, you also need a load balancer in front of your dynamic fleet of EC2 instances.

The way you deploy software is not defined by AWS. You can download your software when the virtual machine starts, create your own AMIs with your software baked in, or use configuration management tools to install what you need. Again, many choices but also many responsibilities.
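To make this more tangible, here is a minimal boto3 (Python) sketch of the building blocks described above: a launch configuration whose user data downloads the application at instance start, and an Auto Scaling Group spanning two Availability Zones that registers its instances with a load balancer target group. All identifiers (AMI, security group, subnets, S3 bucket, target group ARN) are placeholders for illustration, not values from this series.

```python
# Minimal sketch: EC2 based app behind a load balancer, managed by an
# Auto Scaling Group. All identifiers below are placeholders.
import boto3

autoscaling = boto3.client('autoscaling', region_name='eu-west-1')

# User data runs at instance start and downloads the application artifact
# from S3 (one of the deployment options mentioned above).
user_data = """#!/bin/bash -e
aws s3 cp s3://my-artifact-bucket/app.zip /opt/app.zip
unzip -o /opt/app.zip -d /opt/app
/opt/app/start.sh
"""

autoscaling.create_launch_configuration(
    LaunchConfigurationName='app-v1',
    ImageId='ami-12345678',                     # placeholder: your Linux or Windows AMI
    InstanceType='t2.micro',
    SecurityGroups=['sg-12345678'],             # placeholder
    IamInstanceProfile='app-instance-profile',  # least privilege: only the rights the app needs
    UserData=user_data
)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName='app',
    LaunchConfigurationName='app-v1',
    MinSize=2,                                             # at least two instances: no single point of failure
    MaxSize=4,
    DesiredCapacity=2,
    VPCZoneIdentifier='subnet-11111111,subnet-22222222',   # placeholder: subnets in two AZs
    TargetGroupARNs=['arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/app/1234567890123456'],  # placeholder
    HealthCheckType='ELB',
    HealthCheckGracePeriod=300
)
```

In a real setup you would typically describe these resources as infrastructure as code instead of ad-hoc API calls; that is what the EC2 based app posts later in this series cover.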

You may miss Elastic Beanstalk or OpsWorks here. The last 4 years of using AWS in many client projects have shown that those services come with too many limitations for running existing apps. This is different if the app is built from scratch. But if an app is built from scratch, I would suggest a serverless approach!

Containerized app

A containerized app (Docker) can run on ECS: the container cluster service from AWS. ECS runs on top of EC2 instances that you have to manage. ECS makes it very easy to schedule containers in an intelligent way (e.g. zone aware) and also restarts failed containers. ECS comes with a nice integration with the load balancer. You can run all your applications on a single ECS cluster, which leads to better utilization of the underlying hardware compared to running directly on EC2.

To deploy software, you create a Docker image and push it to a Docker repository. You are responsible for publishing the Docker image; ECS then takes care of the rest.
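As a rough illustration (not code from this series), the following boto3 sketch registers a new task definition for an image that has already been pushed and lets ECS roll it out by updating the service. The cluster name, service name, and image URI are placeholder assumptions.

```python
# Minimal sketch: roll out a new container image on ECS. Assumes the image
# was already pushed to a repository and that the cluster and service exist.
import boto3

ecs = boto3.client('ecs', region_name='eu-west-1')

# Register a new revision of the task definition that points at the new image tag.
response = ecs.register_task_definition(
    family='app',
    containerDefinitions=[{
        'name': 'app',
        'image': '123456789012.dkr.ecr.eu-west-1.amazonaws.com/app:v2',  # placeholder image URI
        'cpu': 128,
        'memory': 256,
        'essential': True,
        'portMappings': [{'containerPort': 8080}]
    }]
)

# Point the service at the new revision; ECS schedules the containers across
# Availability Zones, registers them with the load balancer, and replaces
# failed containers.
ecs.update_service(
    cluster='app-cluster',   # placeholder
    service='app',           # placeholder
    taskDefinition=response['taskDefinition']['taskDefinitionArn'],
    desiredCount=2
)
```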

Serverless app

A serverless app (e.g. API Gateway & Lambda) is operated completely by AWS. You upload your source code, and AWS will run it for you. The underlying compute infrastructure is abstracted away and you don’t have access to it any longer (e.g. no SSH possible). AWS provides you access to the logs your application generates and some metrics that you can use to debug problems with your application.
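To give you an idea of how small the remaining footprint is, here is a hypothetical Python Lambda handler behind an API Gateway proxy integration; the handler and response shape are just a sketch, not code from this series.

```python
# Minimal sketch: a Lambda function behind API Gateway (proxy integration).
# AWS runs, scales, and patches the underlying infrastructure; logs end up
# in CloudWatch Logs automatically.
import json

def handler(event, context):
    # API Gateway passes the HTTP request as `event`; the return value
    # becomes the HTTP response.
    params = event.get('queryStringParameters') or {}
    name = params.get('name', 'world')
    return {
        'statusCode': 200,
        'headers': {'Content-Type': 'application/json'},
        'body': json.dumps({'message': 'Hello, ' + name + '!'})
    }
```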

Comparison

When comparing the available options, you can take different perspectives. In this article series, we look at AWS from the velocity perspective. When your goal is to deliver fast, it is important to minimize the work that is not related to that goal: the application running in production. The following table shows your responsibilities.

| Responsibility                            | EC2 based | Containerized | Serverless |
|-------------------------------------------|-----------|---------------|------------|
| Operating system                          | you       | you           | AWS        |
| Runtime (e.g. Node.js, JVM)               | you       | you           | AWS        |
| Web server (e.g. express, Apache, Nginx)  | you       | you           | AWS        |
| Deployment                                | you       | AWS           | AWS        |
| Monitoring                                | AWS       | AWS           | AWS        |
| Logging                                   | you & AWS | you & AWS     | AWS        |
| High availability                         | you & AWS | you & AWS     | AWS        |
| Scalability                               | you & AWS | you & AWS     | AWS        |
| Alerting                                  | you & AWS | you & AWS     | you & AWS  |
| App source code                           | you       | you           | you        |

It’s also important to look at the limitations:

| Option            | Limitations                                                   |
|-------------------|---------------------------------------------------------------|
| EC2 based app     | Linux or Windows                                              |
| Containerized app | Docker                                                        |
| Serverless app    | Node.js, Java/JVM, Python, C#; max. 5 minute execution; Linux |

Decision time

Given the limitations I mentioned, I recommend that you pick the solution that minimizes your responsibilities and still fits your requirements. This should be an excellent starting point to achieve AWS Velocity.

The series continues with a deep dive into the three available options to deploy your application to AWS.

Series

  1. Set the assembly line up
  2. Local development environment
  3. CI/CD Pipeline as Code
  4. Running your application (you are here)
  5. EC2 based app
    a. Infrastructure
    b. CI/CD Pipeline
  6. Containerized ECS based app
    a. Infrastructure
    b. CI/CD Pipeline
  7. Serverless app
  8. Summary

You can find the source code on GitHub.

Michael Wittig

I’ve been building on AWS since 2012 together with my brother Andreas. We are sharing our insights into all things AWS on cloudonaut and have written the book AWS in Action. Besides that, we’re currently working on bucketAV, HyperEnv for GitHub Actions, and marbot.

Here are the contact options for feedback and questions.