How to move a DynamoDB table to another region or account?

Andreas Wittig – 14 Jan 2025

How do you move data from one DynamoDB table to another, for example, when it becomes necessary to move a DynamoDB table to another account or region?

In the following blog post, I will discuss three different options to move DynamoDB tables by backing up and restoring data. All three options work in harmony with Infrastructure as Code tools such as Terraform and CloudFormation. Be aware that downtime is necessary for all three options to avoid data loss.

  1. Restore table from backup
  2. S3 Export and Import
  3. Copy data with DynamoDB CLI tool dynein

Backup and Restore

The first approach is to back up and restore a DynamoDB table. AWS provides two ways to do so:

  • DynamoDB’s built-in backup and restore functionality
  • AWS Backup service

Use the AWS CLI to create an on-demand backup of the source table demo-source with DynamoDB’s built-in backup and restore functionality.

$ aws dynamodb create-backup --table-name demo-source --backup-name snapshot
{
    "BackupDetails": {
        "BackupArn": "arn:aws:dynamodb:eu-west-1:111111111111:table/demo-source/backup/01736522438603-834a89d1",
        "BackupName": "snapshot",
        "BackupSizeBytes": 0,
        "BackupStatus": "CREATING",
        "BackupType": "USER",
        "BackupCreationDateTime": "2025-01-10T16:20:38.603000+01:00"
    }
}
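
The backup is created asynchronously, as the BackupStatus of CREATING above indicates. To check whether it has become available, you can describe the backup using the BackupArn returned by the previous command:

$ aws dynamodb describe-backup --backup-arn arn:aws:dynamodb:eu-west-1:111111111111:table/demo-source/backup/01736522438603-834a89d1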

After the backup is complete, the following command creates a new table based on the backup.

$ aws dynamodb restore-table-from-backup --target-table-name demo-target --backup-arn arn:aws:dynamodb:eu-west-1:111111111111:table/demo-source/backup/01736522438603-834a89d1

Be aware that DynamoDB’s built-in backup and restore functionality does not support restoring a table into another AWS account or region. To accomplish that, use the AWS Backup service instead.

Use the AWS CLI to create a backup with AWS Backup.

$ aws backup start-backup-job --backup-vault-name Default --resource-arn arn:aws:dynamodb:eu-west-1:111111111111:table/demo-source --iam-role-arn arn:aws:iam::111111111111:role/service-role/AWSBackupDefaultServiceRole
{
    "BackupJobId": "0a9c3b91-9cb0-48d7-be40-a379f66921cc",
    "RecoveryPointArn": "arn:aws:backup:eu-west-1:111111111111:recovery-point:39420416-139b-4c6a-9132-66869794cc82",
    "CreationDate": "2025-01-10T16:35:03.365000+01:00",
    "IsParent": false
}
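
The backup job also runs asynchronously. You can poll its state with the BackupJobId from the output above:

$ aws backup describe-backup-job --backup-job-id 0a9c3b91-9cb0-48d7-be40-a379f66921cc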

Copy the backup to another region or account (see Backup and tag copy for details).
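
As a minimal sketch, the copy can be started from the AWS CLI as well. The following assumes that a backup vault named Default already exists in the destination region (us-east-1 in this example) and that the same IAM role may run the copy; for a cross-account copy, point the destination vault ARN to the other account instead:

$ aws backup start-copy-job --recovery-point-arn arn:aws:backup:eu-west-1:111111111111:recovery-point:39420416-139b-4c6a-9132-66869794cc82 --source-backup-vault-name Default --destination-backup-vault-arn arn:aws:backup:us-east-1:111111111111:backup-vault:Default --iam-role-arn arn:aws:iam::111111111111:role/service-role/AWSBackupDefaultServiceRole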

Then, restore into a new table named demo-target.

$ aws backup start-restore-job --recovery-point-arn arn:aws:backup:eu-west-1:111111111111:recovery-point:39420416-139b-4c6a-9132-66869794cc82 --metadata targetTableName=demo-target --iam-role-arn arn:aws:iam::111111111111:role/service-role/AWSBackupDefaultServiceRole
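
The restore runs asynchronously, too. Before wiring the new table into your Infrastructure as Code setup, you may want to wait until the restore job has finished. A small sketch, using the RestoreJobId returned by start-restore-job (shown here as a placeholder):

$ aws backup describe-restore-job --restore-job-id <restore-job-id>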

Next, import the resource into the Terraform state.

$ terraform import aws_dynamodb_table.target demo-target

Importing a DynamoDB table is supported by CloudFormation as well (see Import AWS resources into a CloudFormation stack with a resource import for details).
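
For CloudFormation, the import is driven by a change set of type IMPORT. Here is a minimal sketch, assuming a stack named demo, a template file template.yml that already declares the table (imported resources need a DeletionPolicy, typically Retain), and a resources.json file that maps the logical resource ID to the existing table name. Adjust all names and paths to your setup.

$ cat resources.json
[
  {
    "ResourceType": "AWS::DynamoDB::Table",
    "LogicalResourceId": "Target",
    "ResourceIdentifier": { "TableName": "demo-target" }
  }
]

$ aws cloudformation create-change-set --stack-name demo --change-set-name import-table --change-set-type IMPORT --resources-to-import file://resources.json --template-body file://template.yml
$ aws cloudformation execute-change-set --stack-name demo --change-set-name import-table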

While this is a rock-solid solution that should work with large data sets without any issues, it is a little clumsy to manually import the new table into the CloudFormation stack or Terraform state. Therefore, let’s take a look at the second approach.

S3 Export and Import

The second approach for moving a DynamoDB table uses DynamoDB’s export-to-S3 and import-from-S3 functionality.

The following Terraform code snippet illustrates the procedure.

  1. Prepare the scenario by creating an aws_dynamodb_table.source table and an S3 bucket aws_s3_bucket.export to store the export.
  2. Use the aws_dynamodb_table_item resource to add some test data to the aws_dynamodb_table.source table.
  3. Use the aws_dynamodb_table_export resource to export the source table to S3 once.
  4. Use the import_table block when creating the new aws_dynamodb_table.target table.

# Configure Terraform
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Configure the AWS provider
provider "aws" {
  region = "eu-west-1"
}

# Create the source table
resource "aws_dynamodb_table" "source" {
  name         = "demo-source"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  point_in_time_recovery {
    enabled = true
  }

  attribute {
    name = "id"
    type = "S"
  }
}

# Add an item to the table
resource "aws_dynamodb_table_item" "source" {
  table_name = aws_dynamodb_table.source.name
  hash_key   = aws_dynamodb_table.source.hash_key

  item = <<ITEM
{
  "id": {"S": "1"},
  "data": {"S": "demo"}
}
ITEM
}

# Create an S3 bucket to store the DynamoDB table export
resource "aws_s3_bucket" "export" {
  bucket_prefix = "dynamodb-export-"
  force_destroy = true
}

# Create an export of the DynamoDB table
resource "aws_dynamodb_table_export" "source" {
  depends_on = [aws_dynamodb_table_item.source]
  table_arn  = aws_dynamodb_table.source.arn
  s3_bucket  = aws_s3_bucket.export.id
}

# Create a new table based on the data export stored on S3
resource "aws_dynamodb_table" "target" {
  name         = "demo-target"
  billing_mode = "PAY_PER_REQUEST"
  hash_key     = "id"

  point_in_time_recovery {
    enabled = true
  }

  attribute {
    name = "id"
    type = "S"
  }

  import_table {
    input_format           = aws_dynamodb_table_export.source.export_format
    input_compression_type = "GZIP"

    s3_bucket_source {
      bucket     = aws_s3_bucket.export.id
      key_prefix = "${trimsuffix(aws_dynamodb_table_export.source.manifest_files_s3_key, "/manifest-summary.json")}/data/"
    }
  }
}
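
A single terraform apply runs the whole chain: it creates the source table and the demo item, triggers the export to S3, and finally creates the target table from the exported data.

$ terraform init
$ terraform apply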

There is one thing that bothers me about this solution: the one-time data migration is defined in the Terraform code, even though the migration will no longer play a role once it is complete.

DynamoDB CLI: dynein

The third approach is my preferred one, at least for small data sets: dynein, a command-line interface for Amazon DynamoDB written in Rust.

Use your preferred Infrastructure as Code tool to create the target table. Then use dynein to export the data from the demo-source table.

$ dy export --table demo-source --region eu-west-1 --format jsonl --output-file export.jsonl
2 items processed (10948.89 items/sec)

The export file contains all items exported from the demo-source table.

$ cat export.jsonl
{"id":"1","data":"demo"}
{"id":"2","data":"test"}

Next, import the data to the demo-target table.

$ dy import --table demo-target --region eu-west-1 --format jsonl --input-file export.jsonl
2 items processed (50419.74 items/sec)
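
To double-check that all items arrived, you can scan the target table with dynein as well. This assumes that the --table and --region options work for scan the same way they do for export above:

$ dy scan --table demo-target --region eu-west-1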

I like the approach because it does not interfere with the infrastructure code. There are two things to watch out for: AWS might stop maintaining the dynein project, and the CLI tool might not be a good fit for large data sets.

By the way, dynein is a powerful tool to interact with DynamoDB. Here is a list of all dy commands.

admin      Admin operations such as creating/updating table or GSI
list       List tables in the region. [API: ListTables]
desc       Show detailed information of a table. [API: DescribeTable]
scan       Retrieve items in a table without any condition. [API: Scan]
get        Retrieve an item by specifying primary key(s). [API: GetItem]
query      Retrieve items that match conditions. Partition key is required. [API: Query]
put        Create a new item, or replace an existing item. [API: PutItem]
del        Delete an existing item. [API: DeleteItem]
upd        Update an existing item. [API: UpdateItem]
bwrite     Put or Delete multiple items at one time, up to 25 requests. [API: BatchWriteItem]
use        Switch target table context. After you use the command you don't need to specify table every time, but you may overwrite the target table with --table (-t) option.
config     Manage configuration files (config.yml and cache.yml) from command line
bootstrap  Create sample tables and load test data for bootstrapping
export     Export items from a DynamoDB table and save them as CSV/JSON file.
import     Import items into a DynamoDB table from CSV/JSON file.
backup     Take backup of a DynamoDB table using on-demand backup
restore    Restore a DynamoDB table from backup data
help       Print this message or the help of the given subcommand(s)

Conclusion

So, you learned about three options to move a DynamoDB table to another region or account: Backup and Restore (with DynamoDB’s built-in backup functionality or AWS Backup), S3 Export and Import, or the DynamoDB CLI dynein.

One more aspect that you should consider when moving large amounts of data, which I haven’t covered in this blog post, is cost. The three approaches are charged differently, so do the math before moving large amounts of data.

Andreas Wittig

I’ve been building on AWS since 2012 together with my brother Michael. We are sharing our insights into all things AWS on cloudonaut and have written the book AWS in Action. Besides that, we’re currently working on bucketAV, HyperEnv for GitHub Actions, and marbot.

Here are the contact options for feedback and questions.