Run the AWS CLI v2 inside Docker

Michael Wittig – 24 Jul 2020

The last time I bought a new laptop, I decided to install it from scratch. My goal: keep the installation as clean as possible. Run anything inside Docker containers. Especially the development environments. I work for many clients. I often encounter situations where I need multiple versions of the same software. Docker is of great help. But one of my favorite tools, the AWS CLI v1, was not working perfectly inside Docker. I had issues with command completion and the CodeCommit credential helper for git. A tweet by @nathankpeck motivated me to give the new AWS CLI v2 a try. In this post, I share my learnings and a working solution to run the AWS CLI v2 inside Docker without hassle.

Version 2

I assume that you use macOS Catalina and zsh (the macOS default shell). You should be able to port this to Linux and Windows.

The fastest way to start the AWS CLI v2 inside Docker is this:

docker run --rm -v "$HOME/.aws:/root/.aws:rw" amazon/aws-cli ec2 describe-images

The good news: your AWS CLI config (stored in ~/.aws/) is available inside the container because of the volume mount.

The bad news:

  • The command is pretty long. You don’t want to type more than aws.
  • Command completion does not work.
  • Your files are not available inside the container. Moving something from/to S3 is not going to work.
  • Environment variables from your shell are not available inside the container.
  • If you put this command in your git config as a credential helper, it will not work.

Let’s see how we can fix this.

Restore the aws command

  1. Create the file /usr/local/bin/aws with the following content:

    #!/bin/zsh
    docker run \
      --rm \
      -v "$HOME/.aws:/root/.aws:rw" \
      amazon/aws-cli "$@"
  2. Make the file executable: chmod +x /usr/local/bin/aws

Your aws command will work again:

aws ec2 describe-images
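One detail worth noting: the wrapper should forward its arguments as "$@" (quoted). Unquoted, arguments containing spaces get split into several words before they reach the container. A quick demonstration with a stand-in function (show_args is just an illustration, not part of the setup):

```shell
# show_args stands in for the wrapper: it prints each argument it
# receives on its own line, wrapped in brackets.
show_args() { for a in "$@"; do printf '[%s]\n' "$a"; done; }

# With quoted "$@", the --query expression arrives as one argument:
show_args ec2 describe-images --query "Images[0].Name"
# prints:
# [ec2]
# [describe-images]
# [--query]
# [Images[0].Name]
```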

Let’s add command completion.

Adding command completion

  1. Create the file /usr/local/bin/aws_completer with the following content:

    #!/bin/zsh
    docker run \
      --rm \
      -i \
      --entrypoint /usr/local/bin/aws_completer \
      -e COMP_LINE -e COMP_POINT \
      amazon/aws-cli "$@"
  2. Make the file executable: chmod +x /usr/local/bin/aws_completer

  3. Add the following lines to your ~/.zshrc:

    autoload -Uz compinit && compinit
    autoload -Uz bashcompinit && bashcompinit
    complete -C '/usr/local/bin/aws_completer' aws

If you type aws ec<TAB>, you will see the available commands.
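In case you wonder how this works under the hood: with complete -C, the shell exports COMP_LINE (the current command line) and COMP_POINT (the cursor offset) to the completer program and reads one suggestion per line from its stdout. A stub illustrates the contract (fake_completer is a toy stand-in, not the real aws_completer, which runs inside the container):

```shell
# fake_completer mimics what a completer program does: it inspects
# COMP_LINE and prints one completion candidate per line to stdout.
fake_completer() {
  case "$COMP_LINE" in
    "aws ec"*) printf 'ec2\necr\necs\n' ;;
  esac
}

# Simulate what the shell does when you press <TAB> after "aws ec":
COMP_LINE="aws ec" COMP_POINT=6 fake_completer
# prints:
# ec2
# ecr
# ecs
```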


Making our local files available

If you run a command like aws s3 cp local.file s3://my-bucket/file, you will get an error: “The user-provided path local.file does not exist.” This might seem strange at first because you can see the file on your local machine. The problem: the file is not available inside the container. Let’s modify /usr/local/bin/aws slightly and mount the current working directory:

#!/bin/zsh
docker run \
  --rm \
  -v "$HOME/.aws:/root/.aws:rw" \
  -v "$(pwd):/aws:rw" \
  amazon/aws-cli "$@"

/aws is the WORKDIR of the Docker image. Therefore, we mount the local files into this directory. As long as you operate with relative paths inside your current folder (or its subfolders), it works. Examples:

  • working: aws s3 cp local.file s3://my-bucket/file
  • not working: aws s3 cp ../local.file s3://my-bucket/file
  • not working: aws s3 cp /abs/local.file s3://my-bucket/file
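If you want the wrapper to fail fast instead of letting the container report a confusing error, you could add a guard that checks whether a path stays below the current directory. A sketch, assuming GNU coreutils realpath is available (inside_cwd is a hypothetical helper name):

```shell
# inside_cwd returns 0 if the given path resolves to the current
# working directory or somewhere below it, 1 otherwise.
inside_cwd() {
  local abs
  abs="$(realpath -m -- "$1")" || return 1
  case "$abs" in
    "$PWD")   return 0 ;;
    "$PWD"/*) return 0 ;;
    *)        return 1 ;;
  esac
}

inside_cwd local.file && echo "will work"          # below the mount
inside_cwd ../local.file || echo "will not work"   # outside the mount
```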

Injecting environment variables

Sometimes you want to use environment variables from your machine inside the container, for example, to set the default profile.

export AWS_PROFILE=YOUR_PROFILE_NAME
aws s3 ls # NOT WORKING!!!
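The container starts with a clean environment; nothing from your shell leaks in unless you pass it along explicitly. You can simulate the effect locally with env -i, no Docker required:

```shell
export AWS_PROFILE=YOUR_PROFILE_NAME

# env -i clears the environment for the child process, just like a
# container that does not inherit your shell's variables:
env -i /bin/sh -c 'echo "AWS_PROFILE inside: ${AWS_PROFILE:-<unset>}"'
# prints: AWS_PROFILE inside: <unset>
```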

Unfortunately, this does not work because the AWS_PROFILE environment variable is not available inside the container. Let’s modify /usr/local/bin/aws to fix this:

#!/bin/zsh
docker run \
  --rm \
  -v "$HOME/.aws:/root/.aws:rw" \
  -v "$(pwd):/aws:rw" \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_CA_BUNDLE \
  -e AWS_CLI_FILE_ENCODING \
  -e AWS_CONFIG_FILE \
  -e AWS_DEFAULT_OUTPUT \
  -e AWS_DEFAULT_REGION \
  -e AWS_PAGER \
  -e AWS_PROFILE \
  -e AWS_ROLE_SESSION_NAME \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_SESSION_TOKEN \
  -e AWS_SHARED_CREDENTIALS_FILE \
  -e AWS_STS_REGIONAL_ENDPOINTS \
  amazon/aws-cli "$@"

I added all environment variables that control the AWS CLI v2.
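If you find the long list of -e flags repetitive, the wrapper could build it from an array instead. A sketch that works in both zsh and bash (same variable list as above):

```shell
# The environment variables the AWS CLI v2 reads, passed through one by one.
aws_env_vars=(
  AWS_ACCESS_KEY_ID AWS_CA_BUNDLE AWS_CLI_FILE_ENCODING AWS_CONFIG_FILE
  AWS_DEFAULT_OUTPUT AWS_DEFAULT_REGION AWS_PAGER AWS_PROFILE
  AWS_ROLE_SESSION_NAME AWS_SECRET_ACCESS_KEY AWS_SESSION_TOKEN
  AWS_SHARED_CREDENTIALS_FILE AWS_STS_REGIONAL_ENDPOINTS
)

# Turn each name into a "-e NAME" pair for docker run.
env_flags=()
for v in "${aws_env_vars[@]}"; do
  env_flags+=(-e "$v")
done

# These flags would replace the hand-written -e lines in the wrapper:
echo "${env_flags[@]}"
```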

Connecting with CodeCommit via a git credential helper

To get access to your CodeCommit repositories, git needs to become aware of your AWS credentials.

In your project’s .git/config, add:

[remote "origin"]
	url = https://git-codecommit.us-east-1.amazonaws.com/v1/repos/YOUR_REPO_NAME
	fetch = +refs/heads/*:refs/remotes/origin/*
[credential]
	helper =
	helper = !aws --profile YOUR_PROFILE_NAME codecommit credential-helper $@
	UseHttpPath = true

The empty helper = line is needed on macOS to prevent the system’s keychain from taking over!
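For context: git talks to the helper over stdin/stdout using its credential protocol. On fetch and push, git writes key=value lines roughly like the following to the helper’s stdin and expects username= and password= lines back:

```
protocol=https
host=git-codecommit.us-east-1.amazonaws.com
path=v1/repos/YOUR_REPO_NAME
```

This is why the -i flag below matters: without it, the container’s stdin is closed and the helper never receives git’s request.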

One last change to /usr/local/bin/aws is required:

#!/bin/zsh
docker run \
  --rm \
  -i \
  -v "$HOME/.aws:/root/.aws:rw" \
  -v "$(pwd):/aws:rw" \
  -e AWS_ACCESS_KEY_ID \
  -e AWS_CA_BUNDLE \
  -e AWS_CLI_FILE_ENCODING \
  -e AWS_CONFIG_FILE \
  -e AWS_DEFAULT_OUTPUT \
  -e AWS_DEFAULT_REGION \
  -e AWS_PAGER \
  -e AWS_PROFILE \
  -e AWS_ROLE_SESSION_NAME \
  -e AWS_SECRET_ACCESS_KEY \
  -e AWS_SESSION_TOKEN \
  -e AWS_SHARED_CREDENTIALS_FILE \
  -e AWS_STS_REGIONAL_ENDPOINTS \
  amazon/aws-cli "$@"

You are ready to go! I have one more highlight for you.

Exploring the new features

The main reason why I switched to the AWS CLI v2 is its support for AWS SSO. With the following commands, I get access to all of my AWS accounts!

aws --profile YOUR_PROFILE_NAME sso login
aws --profile YOUR_PROFILE_NAME ec2 describe-instances

Your ~/.aws/config should look similar to this:

[profile YOUR_PROFILE_NAME]
region = eu-west-1
sso_start_url = https://YOUR_NAME.awsapps.com/start
sso_region = YOUR_REGION
sso_account_id = YOUR_ACCOUNT_ID
sso_role_name = YOUR_SSO_ROLE_NAME

As you can see, there is no ~/.aws/credentials file involved. I no longer need to keep long-lived AWS credentials on my machine!

PS: I recommend reading through the list of breaking changes from v1 to v2.
