Run the AWS CLI v2 inside Docker

Michael Wittig – 24 Jul 2020

The last time I bought a new laptop, I decided to set it up from scratch. My goal: keep the installation as clean as possible by running everything inside Docker containers, especially my development environments. I work for many clients and often need multiple versions of the same software, so Docker is of great help. But one of my favorite tools, the AWS CLI v1, did not work perfectly inside Docker: I had issues with command completion and the CodeCommit credential helper for git. A tweet by @nathankpeck motivated me to give the new AWS CLI v2 a try. In this post, I share my learnings and a working solution to run the AWS CLI v2 inside Docker without hassle.

Version 2

I assume that you use macOS Catalina and zsh (the macOS default shell). You should be able to port this to Linux and Windows.

The fastest way to start the AWS CLI v2 inside Docker is this:

docker run --rm -v "$HOME/.aws:/root/.aws:rw" amazon/aws-cli ec2 describe-images

The good news: your AWS CLI config (stored in ~/.aws/) is available inside the container because of the volume mount.

The bad news:

  • The command is pretty long. You don’t want to type more than aws.
  • Command completion does not work.
  • Your files are not available inside the container. Moving something from/to S3 is not going to work.
  • Environment variables from your shell are not available inside the container.
  • If you put this command in your git config as a credential helper, it will not work.

Let’s see how we can fix this.

Restore the aws command

  1. Create the file /usr/local/bin/aws with the following content:

    #!/bin/zsh
    docker run \
      --rm \
      -v "$HOME/.aws:/root/.aws:rw" \
      amazon/aws-cli "$@"
  2. Make the file executable: chmod +x /usr/local/bin/aws

Your aws command will work again:

aws ec2 describe-images

Let’s add command completion.

Adding command completion

  1. Create the file /usr/local/bin/aws_completer with the following content:

    #!/bin/zsh
    docker run \
      --rm \
      -i \
      --entrypoint /usr/local/bin/aws_completer \
      -e COMP_LINE -e COMP_POINT \
      amazon/aws-cli "$@"
  2. Make the file executable: chmod +x /usr/local/bin/aws_completer

  3. Add the following lines to your ~/.zshrc:
    autoload -Uz compinit && compinit
    autoload -Uz bashcompinit && bashcompinit
    complete -C '/usr/local/bin/aws_completer' aws

If you type aws ec<TAB>, you will see the available commands.
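How does this work? The `complete -C` hook runs the completer with the typed command line exported in COMP_LINE and the cursor offset in COMP_POINT, then reads one completion candidate per line from its stdout. If completion ever misbehaves, you can reproduce the call by hand (a debugging sketch; the invocation of the wrapper is shown as a comment):

```shell
# Simulate what the shell sends to the completer.
COMP_LINE="aws ec"            # what you have typed so far
COMP_POINT=${#COMP_LINE}      # cursor position: end of the typed text
echo "COMP_LINE=$COMP_LINE COMP_POINT=$COMP_POINT"

# Running the wrapper with these variables set prints the matching
# subcommands (such as ec2 and ecs), one per line:
#   COMP_LINE="$COMP_LINE" COMP_POINT="$COMP_POINT" /usr/local/bin/aws_completer
```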


Making our local files available

If you run a command like aws s3 cp local.file s3://my-bucket/file, you will get an error: “The user-provided path local.file does not exist.” This might seem strange at first because you can see the file on your local machine. The problem: the file is not available inside the container. Let’s modify /usr/local/bin/aws slightly and mount the current working directory:

#!/bin/zsh
docker run \
  --rm \
  -v "$HOME/.aws:/root/.aws:rw" \
  -v "$(pwd):/aws:rw" \
  amazon/aws-cli "$@"

/aws is the WORKDIR of the Docker image. Therefore, we mount the local files into this directory. As long as you operate with relative paths inside your current folder (or its subfolders), it works. Examples:

  • working: aws s3 cp local.file s3://my-bucket/file
  • not working: aws s3 cp ../local.file s3://my-bucket/file
  • not working: aws s3 cp /abs/local.file s3://my-bucket/file
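The rule behind these examples can be captured in a few lines. This hypothetical helper (not part of the wrapper) returns success only when a path stays inside the mounted working directory, which is the only part of your filesystem the container can see:

```shell
# Hypothetical helper: does a path argument resolve inside the container?
# Only $(pwd) is mounted (at /aws), so anything outside it is invisible.
path_visible_in_container() {
  case "$1" in
    /*)   return 1 ;;  # absolute host paths are not mounted
    ../*) return 1 ;;  # parent directories are outside the mount
    *)    return 0 ;;  # relative paths under $(pwd) map to /aws
  esac
}

path_visible_in_container "local.file"      && echo "visible"
path_visible_in_container "../local.file"   || echo "outside mount"
path_visible_in_container "/abs/local.file" || echo "outside mount"
```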

Injecting environment variables

Sometimes, you want to use the environment variables from your machine inside the container. E.g., to set the default profile.

export AWS_PROFILE=YOUR_PROFILE_NAME
aws s3 ls # NOT WORKING!!!

Unfortunately, this does not work because the AWS_PROFILE environment variable is not available inside the container. Let’s modify /usr/local/bin/aws to fix this:

#!/bin/zsh
docker run \
  --rm \
  -v "$HOME/.aws:/root/.aws:rw" \
  -v "$(pwd):/aws:rw" \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_CA_BUNDLE \
  -e AWS_CLI_FILE_ENCODING \
  -e AWS_CONFIG_FILE \
  -e AWS_DEFAULT_OUTPUT \
  -e AWS_DEFAULT_REGION \
  -e AWS_PAGER \
  -e AWS_PROFILE \
  -e AWS_ROLE_SESSION_NAME \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_SESSION_TOKEN \
  -e AWS_SHARED_CREDENTIALS_FILE \
  -e AWS_STS_REGIONAL_ENDPOINTS \
  amazon/aws-cli "$@"

I added all environment variables that control the AWS CLI v2.
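If you prefer not to repeat -e thirteen times, the flags can be generated from a list. A sketch (the docker invocation stays the same and is shown as a comment):

```shell
# Build the list of -e flags from the variable names once.
AWS_ENV_VARS=(
  AWS_ACCESS_KEY_ID AWS_CA_BUNDLE AWS_CLI_FILE_ENCODING AWS_CONFIG_FILE
  AWS_DEFAULT_OUTPUT AWS_DEFAULT_REGION AWS_PAGER AWS_PROFILE
  AWS_ROLE_SESSION_NAME AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
  AWS_SHARED_CREDENTIALS_FILE AWS_STS_REGIONAL_ENDPOINTS
)
env_flags=()
for v in "${AWS_ENV_VARS[@]}"; do
  env_flags+=(-e "$v")   # docker run -e VAR forwards VAR only if it is set
done
# Then replace the thirteen -e lines in the wrapper with:
#   docker run --rm \
#     -v "$HOME/.aws:/root/.aws:rw" \
#     -v "$(pwd):/aws:rw" \
#     "${env_flags[@]}" \
#     amazon/aws-cli "$@"
```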

Connecting with CodeCommit via a git credential helper

To get access to your CodeCommit repositories, git needs to become aware of your AWS credentials.

In your project’s .git/config, add:

[remote "origin"]
    url = https://git-codecommit.us-east-1.amazonaws.com/v1/repos/YOUR_REPO_NAME
    fetch = +refs/heads/*:refs/remotes/origin/*
[credential]
    helper =
    helper = !aws --profile YOUR_PROFILE_NAME codecommit credential-helper $@
    UseHttpPath = true

The empty helper = line is needed on Macs to prevent the system’s keychain from taking over!

One last change to /usr/local/bin/aws is required:

#!/bin/zsh
docker run \
  --rm \
  -i \
  -v "$HOME/.aws:/root/.aws:rw" \
  -v "$(pwd):/aws:rw" \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_CA_BUNDLE \
  -e AWS_CLI_FILE_ENCODING \
  -e AWS_CONFIG_FILE \
  -e AWS_DEFAULT_OUTPUT \
  -e AWS_DEFAULT_REGION \
  -e AWS_PAGER \
  -e AWS_PROFILE \
  -e AWS_ROLE_SESSION_NAME \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_SESSION_TOKEN \
  -e AWS_SHARED_CREDENTIALS_FILE \
  -e AWS_STS_REGIONAL_ENDPOINTS \
  amazon/aws-cli "$@"
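The new -i flag keeps stdin open, and the credential helper depends on that: git talks to its helper via a simple key=value line protocol on stdin/stdout. A sketch of the exchange (the attribute names are part of git’s credential protocol; the repository values are the placeholders from the config above):

```shell
# What git writes to the helper's stdin when fetching over HTTPS:
request=$(printf 'protocol=https\nhost=git-codecommit.us-east-1.amazonaws.com\npath=v1/repos/YOUR_REPO_NAME\n')
echo "$request"

# The helper answers on stdout with short-lived credentials derived from
# your AWS credentials:
#   username=...
#   password=...
# Without -i, docker closes the container's stdin and the helper never
# receives the request.
```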

You are ready to go! I have one more highlight for you.

Exploring the new features

The main reason why I switched to the AWS CLI v2 is the support for AWS SSO. With the following command, I have access to all of my AWS accounts!

aws --profile YOUR_PROFILE_NAME sso login
aws --profile YOUR_PROFILE_NAME ec2 describe-instances

Your ~/.aws/config should look similar to this:

[profile YOUR_PROFILE_NAME]
region = eu-west-1
sso_start_url = https://YOUR_NAME.awsapps.com/start
sso_region = YOUR_REGION
sso_account_id = YOUR_ACCOUNT_ID
sso_role_name = YOUR_SSO_ROLE_NAME

As you can see, there is no ~/.aws/credentials file anymore. I don’t need to keep any long-lived AWS credentials on my machine!

PS: I recommend reading through the list of breaking changes from v1 to v2.

Tags: aws cli docker
Michael Wittig

I’m an independent consultant, technical writer, and programming founder. All these activities have to do with AWS. I’m writing this blog and all other projects together with my brother Andreas.

In 2009, we joined the same company as software developers. Three years later, we were looking for a way to deploy our software—an online banking platform—in an agile way. We got excited about the possibilities in the cloud and the DevOps movement. It’s no wonder we ended up migrating the whole infrastructure of Tullius Walden Bank to AWS. This was a first in the finance industry, at least in Germany! Since 2015, we have accelerated the cloud journeys of startups, mid-sized companies, and enterprises. We have penned books like Amazon Web Services in Action and Rapid Docker on AWS, we regularly update our blog, and we contribute to the open source community. Besides running a two-headed consultancy, we are entrepreneurs building Software-as-a-Service products.

We are available for projects.

You can contact me via Email, Twitter, and LinkedIn.
