AWS Velocity Series: Containerized ECS based app infrastructure

EC2 Container Service (ECS) is a highly scalable, fast container management service that makes it easy to run, stop, and manage Docker containers on a cluster of Amazon EC2 instances. To run an application on ECS, you need the following components:

  • Docker image
  • ECS cluster: EC2 instances running Docker and the ECS agent
  • ECS service: Managed Docker containers on the ECS cluster

ECS provides the ability to run a production-ready application on EC2 with reduced responsibilities and increased deployment speed compared to the EC2 based approach. By production-ready, I mean:

  • Highly available: no single point of failure
  • Scalable: increase or decrease the number of instances/containers based on load
  • Frictionless deployment: deliver new versions of your application automatically without downtime
  • Secure: patch operating systems and libraries frequently, and follow the least privilege principle in all areas
  • Operations: provide tools like logging, monitoring and alerting to recognize and debug problems

The overall architecture will consist of a load balancer, forwarding requests to containers running on multiple EC2 instances, distributed among different availability zones (data centers).

ECS based app architecture

The diagram was created with Cloudcraft - Visualize your cloud architecture like a pro.

AWS Velocity Series

Most of our clients use AWS to reduce time-to-market following an agile approach. But AWS is only one part of the solution. In this article series, I show you how we help our clients to improve velocity: the time from idea to production. Discover all posts!

Let’s start with the needed infrastructure: the ECS cluster and the ECS service.

ECS cluster

The ECS cluster is a fleet of EC2 instances with the ECS agent and Docker installed. The ECS cluster is responsible for scheduling the work (containers) to the EC2 instances.

ECS cluster architecture


Your Docker containers will run on those EC2 instances. You don’t need to care about the ECS cluster too much: we provide a free and production-ready CloudFormation template. Please set up the ECS cluster now if you want to follow the scenario. AWS charges will likely occur! The ECS cluster needs to run in a VPC, so if you don’t have a VPC stack based on our Free Templates for AWS CloudFormation (https://github.com/widdix/aws-cf-templates/tree/master/vpc), create a VPC stack first.
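Cross-stack references are what tie the VPC, cluster, and service templates together: the parent stacks export values that ecs.yml later imports with 'Fn::ImportValue'. As a rough sketch only (the exact resource and export names are assumptions modeled on the naming convention used later, e.g. '${AWS::StackName}-VPC'), the Outputs section of a parent VPC stack could look like this:

```yaml
# Sketch: Outputs a parent VPC stack could export so that child stacks
# (like infrastructure/ecs.yml) can import them via 'Fn::ImportValue'.
# Resource and export names here are assumptions, not the exact template.
Outputs:
  VPC:
    Description: 'The VPC.'
    Value: !Ref VPC
    Export:
      Name: !Sub '${AWS::StackName}-VPC' # imported as '${ParentVPCStack}-VPC'
  SubnetAPublic:
    Description: 'Public subnet in availability zone A.'
    Value: !Ref SubnetAPublic
    Export:
      Name: !Sub '${AWS::StackName}-SubnetAPublic'
  SubnetBPublic:
    Description: 'Public subnet in availability zone B.'
    Value: !Ref SubnetBPublic
    Export:
      Name: !Sub '${AWS::StackName}-SubnetBPublic'
```

Because export names are account- and region-unique, passing the parent stack name as a parameter is all a child stack needs to resolve them.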

This first step was easy. Next, you will learn how to use the ECS cluster with ECS services.

ECS service

The ECS service is responsible for launching Docker containers in the cluster. The service also makes sure that failed containers are replaced, and it takes care of performing rolling updates for you. You also need a load balancer to route traffic to the containers. The following ECS service template is based on a free and production-ready CloudFormation template.

Load balancer

You can follow step by step or get the full source code here: https://github.com/widdix/aws-velocity

Create a file infrastructure/ecs.yml. The first part of the file contains the load balancer. To fully describe an Application Load Balancer, you need:

  • A Security Group that allows traffic on port 80
  • The Application Load Balancer itself
  • A Target Group, which is a group of EC2 instances. Each of them runs containers that can receive traffic from the load balancer
  • A Listener, which wires the load balancer together with the target group and defines the listening port

Watch out for comments with more detailed information in the code.

infrastructure/ecs.yml (GitHub)
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'ECS: service that runs on an ECS cluster based on ecs/cluster.yaml and uses a dedicated ALB, a cloudonaut.io template'
Parameters:
  # You can reuse a VPC for multiple applications. In this case, we use one of our Free Templates for AWS CloudFormation (https://github.com/widdix/aws-cf-templates/tree/master/vpc).
  ParentVPCStack:
    Description: 'Stack name of parent VPC stack based on vpc/vpc-*azs.yaml template.'
    Type: String
  ParentClusterStack:
    Description: 'Stack name of parent Cluster stack based on ecs/cluster.yaml template.'
    Type: String
Resources:
  # Allow traffic from the load balancer to the EC2 instances in the cluster. This is only necessary because we reuse the cluster template as it is!
  SecurityGroupInALB:
    Type: 'AWS::EC2::SecurityGroupIngress'
    Properties:
      GroupId:
        'Fn::ImportValue': !Sub '${ParentClusterStack}-SecurityGroup'
      IpProtocol: tcp
      FromPort: 0
      ToPort: 65535
      SourceSecurityGroupId: !Ref ALBSecurityGroup
  # The load balancer accepts HTTP traffic. Therefore the firewall must allow incoming traffic on port 80.
  ALBSecurityGroup:
    Type: 'AWS::EC2::SecurityGroup'
    Properties:
      GroupDescription: 'ecs-cluster-alb'
      VpcId:
        'Fn::ImportValue': !Sub '${ParentVPCStack}-VPC'
      SecurityGroupIngress:
      - CidrIp: '0.0.0.0/0'
        FromPort: 80
        ToPort: 80
        IpProtocol: tcp
  # The load balancer needs to run in public subnets because our users should be able to access the app from the Internet.
  LoadBalancer:
    Type: 'AWS::ElasticLoadBalancingV2::LoadBalancer'
    Properties:
      Scheme: 'internet-facing'
      SecurityGroups:
      - !Ref ALBSecurityGroup
      Subnets:
      - 'Fn::ImportValue': !Sub '${ParentVPCStack}-SubnetAPublic'
      - 'Fn::ImportValue': !Sub '${ParentVPCStack}-SubnetBPublic'
  # A target group groups a bunch of backend instances that receive traffic from the load balancer. The health check ensures that only working backends are used.
  DefaultTargetGroup:
    Type: 'AWS::ElasticLoadBalancingV2::TargetGroup'
    Properties:
      HealthCheckIntervalSeconds: 15
      HealthCheckPath: '/1'
      HealthCheckProtocol: HTTP
      HealthCheckTimeoutSeconds: 10
      HealthyThresholdCount: 2
      Matcher:
        HttpCode: '200'
      Port: 80
      Protocol: HTTP
      UnhealthyThresholdCount: 4
      VpcId:
        'Fn::ImportValue': !Sub '${ParentVPCStack}-VPC'
  # The load balancer should listen on port 80 for HTTP traffic
  HttpListener:
    Type: 'AWS::ElasticLoadBalancingV2::Listener'
    Properties:
      DefaultActions:
      - TargetGroupArn: !Ref DefaultTargetGroup
        Type: forward
      LoadBalancerArn: !Ref LoadBalancer
      Port: 80
      Protocol: HTTP
# A CloudFormation stack can return information that is needed by other stacks or scripts.
Outputs:
  DNSName:
    Description: 'The DNS name for the ECS cluster/service load balancer.'
    Value: !GetAtt 'LoadBalancer.DNSName'
    Export:
      Name: !Sub '${AWS::StackName}-DNSName'
  URL:
    Description: 'URL to the ECS service.'
    Value: !Sub 'http://${LoadBalancer.DNSName}'
    Export:
      Name: !Sub '${AWS::StackName}-URL'

But how do you get notified if something goes wrong? Let’s add a parameter to the Parameters section to make the receiver configurable:

infrastructure/ecs.yml (GitHub)
  AdminEmail:
    Description: 'The email address of the admin who receives alerts.'
    Type: String

Alerts are triggered by a CloudWatch Alarm which can send an alert to an SNS topic. You can subscribe to this topic via an email address to receive the alerts. Let’s create an SNS topic and two alarms in the Resources section:

infrastructure/ecs.yml (GitHub)
  # An SNS topic is used to send alerts via email to the value of the AdminEmail parameter
  Alerts:
    Type: 'AWS::SNS::Topic'
    Properties:
      Subscription:
      - Endpoint: !Ref AdminEmail
        Protocol: email
  # This alarm is triggered if the load balancer responds with 5XX status codes
  LoadBalancer5XXAlarm:
    Type: 'AWS::CloudWatch::Alarm'
    Properties:
      EvaluationPeriods: 1
      Statistic: Sum
      Threshold: 0
      AlarmDescription: 'Load balancer responds with 5XX status codes.'
      Period: 60
      AlarmActions:
      - !Ref Alerts
      Namespace: 'AWS/ApplicationELB'
      Dimensions:
      - Name: LoadBalancer
        Value: !GetAtt 'LoadBalancer.LoadBalancerFullName'
      ComparisonOperator: GreaterThanThreshold
      MetricName: HTTPCode_ELB_5XX_Count
  # This alarm is triggered if the backend responds with 5XX status codes
  LoadBalancerTargetGroup5XXAlarm:
    Type: 'AWS::CloudWatch::Alarm'
    Properties:
      EvaluationPeriods: 1
      Statistic: Sum
      Threshold: 0
      AlarmDescription: 'Load balancer target responds with 5XX status codes.'
      Period: 60
      AlarmActions:
      - !Ref Alerts
      Namespace: 'AWS/ApplicationELB'
      Dimensions:
      - Name: LoadBalancer
        Value: !GetAtt 'LoadBalancer.LoadBalancerFullName'
      ComparisonOperator: GreaterThanThreshold
      MetricName: HTTPCode_Target_5XX_Count

Let’s recap what you implemented: a load balancer with a firewall rule that allows traffic on port 80. In the case of 5XX status codes, you will receive an email. But the load balancer alone is not enough. Now it’s time to add the ECS service.

ECS service

I already talked about the ECS service. It will take care of your containers. To be more precise, it will take care of your tasks that run in the ECS cluster. One task can contain one or multiple Docker containers. Three resources are needed:

  • A Task Definition that describes the Docker containers (similar to a Docker Compose file, or a Kubernetes Deployment)
  • An IAM Role for your container, so you don’t need to pass in static credentials when you want to interact with AWS from within your container. If you don’t want to make AWS API calls from your container, the role is not needed.
  • The ECS Service itself.

To make things parameterizable, you also need to add a few parameters to the Parameters section:

infrastructure/ecs.yml (GitHub)
  # Where does this Docker image come from? It will be created in the pipeline!
  Image:
    Description: 'The image to use for a container, which is passed directly to the Docker daemon. You can use images in the Docker Hub registry or specify other repositories (repository-url/image:tag).'
    Type: String
  DesiredCount:
    Description: 'The number of simultaneous tasks, which you specify by using the TaskDefinition property, that you want to run on the cluster.'
    Type: Number
    Default: 2
    ConstraintDescription: 'Must be >= 1'
    MinValue: 1
  MaxCapacity:
    Description: 'The maximum number of simultaneous tasks that you want to run on the cluster.'
    Type: Number
    Default: 4
    ConstraintDescription: 'Must be >= 1'
    MinValue: 1
  MinCapacity:
    Description: 'The minimum number of simultaneous tasks that you want to run on the cluster.'
    Type: Number
    Default: 2
    ConstraintDescription: 'Must be >= 1'
    MinValue: 1

Now you can describe the resources in the CloudFormation template.

infrastructure/ecs.yml (GitHub)
  TaskDefinition:
    Type: 'AWS::ECS::TaskDefinition'
    Properties:
      Family: !Ref 'AWS::StackName'
      NetworkMode: bridge
      ContainerDefinitions:
      - Name: main
        Image: !Ref Image # This is where the Docker image is configured
        Memory: 128
        PortMappings:
        - ContainerPort: 3000 # The image exposes the app on port 3000
          Protocol: tcp
        Essential: true
        LogConfiguration:
          LogDriver: awslogs
          Options:
            'awslogs-region': !Ref 'AWS::Region'
            'awslogs-group':
              'Fn::ImportValue': !Sub '${ParentClusterStack}-LogGroup'
            'awslogs-stream-prefix': !Ref 'AWS::StackName'
  # The role is using the managed policy AmazonEC2ContainerServiceRole
  ServiceRole:
    Type: 'AWS::IAM::Role'
    Properties:
      ManagedPolicyArns:
      - 'arn:aws:iam::aws:policy/service-role/AmazonEC2ContainerServiceRole'
      AssumeRolePolicyDocument:
        Version: '2008-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: 'ecs.amazonaws.com'
          Action: 'sts:AssumeRole'
  Service:
    Type: 'AWS::ECS::Service'
    DependsOn: HttpListener
    Properties:
      Cluster:
        'Fn::ImportValue': !Sub '${ParentClusterStack}-Cluster'
      DeploymentConfiguration: # This is the configuration for the rolling update
        MaximumPercent: 200
        MinimumHealthyPercent: 50
      DesiredCount: !Ref DesiredCount
      LoadBalancers:
      - ContainerName: main
        ContainerPort: 3000 # The image exposes the app on port 3000
        TargetGroupArn: !Ref DefaultTargetGroup
      Role: !GetAtt 'ServiceRole.Arn'
      TaskDefinition: !Ref TaskDefinition

Let’s recap what you implemented: a task definition to define the containers that are managed by the service, an IAM role that is accessible inside the containers, and the ECS service that uses the task definition to launch tasks (a bunch of containers) in the cluster. Logs from the containers are already shipped to CloudWatch Logs by the awslogs log driver and are visible in the Log Group that is part of the ECS cluster template.
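The Log Group itself lives in the cluster template, not in ecs.yml. As a minimal sketch of what such a resource could look like (the resource name, retention period, and export name are assumptions following the naming convention above, not the exact cluster template):

```yaml
# Sketch: a CloudWatch Logs Log Group as the cluster template might define it.
# Resource name, retention period, and export name are assumptions.
Resources:
  LogGroup:
    Type: 'AWS::Logs::LogGroup'
    Properties:
      RetentionInDays: 14 # keep container logs for two weeks instead of forever
Outputs:
  LogGroup:
    Description: 'Log group of the ECS cluster.'
    Value: !Ref LogGroup
    Export:
      Name: !Sub '${AWS::StackName}-LogGroup' # imported as '${ParentClusterStack}-LogGroup'
```

The awslogs log driver in the task definition only needs the log group's name and region; the stream prefix keeps streams from different stacks apart.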

Two things are missing:

  1. Scalability of containers
  2. Alerting if containers have issues

Let’s tackle those issues step by step.

Auto scaling of containers

If you auto scale the number of containers, your ECS cluster must be able to auto scale as well. If you use our free template on GitHub, the cluster will auto scale, too.

Auto Scaling works similarly to the EC2 based approach. To scale based on load, you need to add:

  • Scaling Policies that define what happens when the system scales up or down
  • CloudWatch Alarms to trigger a Scaling Policy based on a metric such as CPU utilization
  • Additionally, you need a so-called Scalable Target. The Scalable Target can be an ECS service, but it could also be a Spot Fleet or an EMR Instance Group.

Again, you have to add those resources to the Resources section of your template:

infrastructure/ecs.yml (GitHub)
  ScalableTargetRole: # based on http://docs.aws.amazon.com/AmazonECS/latest/developerguide/autoscale_IAM_role.html
    Type: 'AWS::IAM::Role'
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
        - Effect: Allow
          Principal:
            Service: 'application-autoscaling.amazonaws.com'
          Action: 'sts:AssumeRole'
      Path: '/'
      Policies:
      - PolicyName: ecs
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - 'ecs:DescribeServices'
            - 'ecs:UpdateService'
            Resource: '*'
      - PolicyName: cloudwatch
        PolicyDocument:
          Version: '2012-10-17'
          Statement:
          - Effect: Allow
            Action:
            - 'cloudwatch:DescribeAlarms'
            Resource: '*'
  ScalableTarget:
    Type: 'AWS::ApplicationAutoScaling::ScalableTarget'
    Properties:
      MaxCapacity: !Ref MaxCapacity
      MinCapacity: !Ref MinCapacity
      ResourceId: !Sub
      - 'service/${Cluster}/${Service}'
      - Cluster:
          'Fn::ImportValue': !Sub '${ParentClusterStack}-Cluster'
        Service: !GetAtt 'Service.Name'
      RoleARN: !GetAtt 'ScalableTargetRole.Arn'
      ScalableDimension: 'ecs:service:DesiredCount'
      ServiceNamespace: ecs
  ScaleUpPolicy:
    Type: 'AWS::ApplicationAutoScaling::ScalingPolicy'
    Properties:
      PolicyName: !Sub '${AWS::StackName}-scale-up'
      PolicyType: StepScaling
      ScalingTargetId: !Ref ScalableTarget
      StepScalingPolicyConfiguration:
        AdjustmentType: PercentChangeInCapacity
        Cooldown: 300
        MinAdjustmentMagnitude: 1
        StepAdjustments:
        - MetricIntervalLowerBound: 0
          ScalingAdjustment: 25
  ScaleDownPolicy:
    Type: 'AWS::ApplicationAutoScaling::ScalingPolicy'
    Properties:
      PolicyName: !Sub '${AWS::StackName}-scale-down'
      PolicyType: StepScaling
      ScalingTargetId: !Ref ScalableTarget
      StepScalingPolicyConfiguration:
        AdjustmentType: PercentChangeInCapacity
        Cooldown: 300
        MinAdjustmentMagnitude: 1
        StepAdjustments:
        - MetricIntervalUpperBound: 0 # the scale-down step must cover the interval below the alarm threshold
          ScalingAdjustment: -25
  CPUUtilizationHighAlarm:
    Type: 'AWS::CloudWatch::Alarm'
    Properties:
      AlarmDescription: 'Service is running out of CPU'
      Namespace: 'AWS/ECS'
      Dimensions:
      - Name: ClusterName
        Value:
          'Fn::ImportValue': !Sub '${ParentClusterStack}-Cluster'
      - Name: ServiceName
        Value: !GetAtt 'Service.Name'
      MetricName: CPUUtilization
      ComparisonOperator: GreaterThanThreshold
      Statistic: Average
      Period: 60
      EvaluationPeriods: 1
      Threshold: 70
      AlarmActions:
      - !Ref ScaleUpPolicy
  CPUUtilizationLowAlarm:
    Type: 'AWS::CloudWatch::Alarm'
    Properties:
      AlarmDescription: 'Service is wasting CPU'
      Namespace: 'AWS/ECS'
      Dimensions:
      - Name: ClusterName
        Value:
          'Fn::ImportValue': !Sub '${ParentClusterStack}-Cluster'
      - Name: ServiceName
        Value: !GetAtt 'Service.Name'
      MetricName: CPUUtilization
      ComparisonOperator: LessThanThreshold
      Statistic: Average
      Period: 60
      EvaluationPeriods: 1
      Threshold: 30
      AlarmActions:
      - !Ref ScaleDownPolicy

The number of tasks is now increased if the CPU utilization of the service goes above 70%, while the number of tasks is decreased if the CPU utilization falls below 30%.

Monitoring

Last but not least, you have to add CloudWatch Alarms to get alerted if something is wrong with your service. Add the following to the Resources section:

infrastructure/ecs.yml (GitHub)
  # Sends an alert if the average CPU load of the past 5 minutes is higher than 85%
  CPUTooHighAlarm:
    Type: 'AWS::CloudWatch::Alarm'
    Properties:
      AlarmDescription: 'Service is running out of CPU'
      Namespace: 'AWS/ECS'
      Dimensions:
      - Name: ClusterName
        Value:
          'Fn::ImportValue': !Sub '${ParentClusterStack}-Cluster'
      - Name: ServiceName
        Value: !GetAtt 'Service.Name'
      MetricName: CPUUtilization
      ComparisonOperator: GreaterThanThreshold
      Statistic: Average
      Period: 300
      EvaluationPeriods: 1
      Threshold: 85
      AlarmActions:
      - !Ref Alerts

The infrastructure is now ready. Read the next part of the series to learn how to set up the CI/CD pipeline to deploy the ECS based app.

Series

The AWS Velocity series is split into readable chunks. Subscribe to our RSS feed or newsletter to get notified when new articles become available.

AWS Velocity Cover

  1. Set the assembly line up
  2. Local development environment
  3. CI/CD Pipeline as Code
  4. Running your application
  5. EC2 based app
    a. Infrastructure
    b. CI/CD Pipeline
  6. Containerized ECS based app
    a. Infrastructure (you are here)
    b. CI/CD Pipeline
  7. Serverless app
  8. Summary (coming soon)

You can find the source code on GitHub.
