Official AMI Images

List of AMI images for each AWS region:
Region            Version   Instance Type   Architecture   AMI
ca-central-1      v0.9.0    hvm             amd64          ami-062996f75fa0a8abd
ap-northeast-1    v0.9.3    hvm             amd64          ami-025b1033e7e6516d0
ap-northeast-1    v0.9.3    hvm             arm64          ami-075b312f396ebe8cc
ap-northeast-2    v0.9.3    hvm             amd64          ami-0130fdee881df448b
ap-northeast-2    v0.9.3    hvm             arm64          ami-0598fbe460580cbd2
ap-northeast-3    v0.9.3    hvm             amd64          ami-0f25b8442937c089c
ap-northeast-3    v0.9.3    hvm             arm64          ami-0456217d630aaef50
ap-south-1        v0.9.3    hvm             amd64          ami-04c5558fabe7ac261
ap-south-1        v0.9.3    hvm             arm64          ami-0969d9c2051b96e58
ap-southeast-1    v0.9.3    hvm             arm64          ami-0d82959d393ff3d08
ap-southeast-1    v0.9.3    hvm             amd64          ami-018077171e5ece3fb
ap-southeast-2    v0.9.3    hvm             amd64          ami-026ea2c8016fc368a
ap-southeast-2    v0.9.3    hvm             arm64          ami-008e873a2cff0e1dd
ca-central-1      v0.9.3    hvm             arm64          ami-028fdd24d0f5cd6fe
ca-central-1      v0.9.3    hvm             amd64          ami-0ee3fb21fa8460fe1
eu-central-1      v0.9.3    hvm             amd64          ami-0a9bb10f7ba4005be
eu-central-1      v0.9.3    hvm             arm64          ami-04b43d959dea94867
eu-north-1        v0.9.3    hvm             amd64          ami-0aac2b7f6e6e4165f
eu-north-1        v0.9.3    hvm             arm64          ami-0efa72d5b08842537
eu-south-1        v0.9.3    hvm             arm64          ami-0a26ca9557581ab4d
eu-south-1        v0.9.3    hvm             amd64          ami-07a8f297eb1a122bb
eu-west-1         v0.9.3    hvm             arm64          ami-0cba23676c5b1387d
eu-west-1         v0.9.3    hvm             amd64          ami-0b48cec54894a4761
eu-west-2         v0.9.3    hvm             amd64          ami-07905ee641b8c9707
eu-west-2         v0.9.3    hvm             arm64          ami-0146a0771fcbc363c
eu-west-3         v0.9.3    hvm             arm64          ami-0b89c25cf3e114987
eu-west-3         v0.9.3    hvm             amd64          ami-090aba34b01a2a6b8
sa-east-1         v0.9.3    hvm             amd64          ami-0f97b01b268404c33
sa-east-1         v0.9.3    hvm             arm64          ami-0bd7760952404d288
us-east-1         v0.9.3    hvm             arm64          ami-0db57fa9429bc2e10
us-east-1         v0.9.3    hvm             amd64          ami-06d5ea91e8430e0b2
us-east-2         v0.9.3    hvm             arm64          ami-050decc665180623c
us-east-2         v0.9.3    hvm             amd64          ami-091ec45dfbcfee3e0
us-west-1         v0.9.3    hvm             arm64          ami-034413bd243951981
us-west-1         v0.9.3    hvm             amd64          ami-09717b6de2278709c
us-west-2         v0.9.3    hvm             arm64          ami-0d44b1382afc4aec2
us-west-2         v0.9.3    hvm             amd64          ami-03eb72de1cff1b3a6
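When scripting the steps below, it can help to resolve the AMI for your region from the table above. A minimal sketch (the `talos_ami` helper is hypothetical; it covers only three regions from the table — add cases for the regions and architectures you use):

```shell
# Resolve the v0.9.3 amd64 AMI for a region.
# Values are copied from the table above; extend as needed.
talos_ami() {
  case "$1" in
    us-east-1) echo ami-06d5ea91e8430e0b2 ;;
    us-west-2) echo ami-03eb72de1cff1b3a6 ;;
    eu-west-1) echo ami-0b48cec54894a4761 ;;
    *) echo "unknown region: $1" >&2; return 1 ;;
  esac
}

# Example: AMI=$(talos_ami us-east-1)
```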

Creating a Cluster via the AWS CLI

In this guide we will create an HA Kubernetes cluster with 3 worker nodes. We assume an existing VPC, and some familiarity with AWS. If you need more information on AWS specifics, please see the official AWS documentation.

Create the Subnet

aws ec2 create-subnet \
    --region $REGION \
    --vpc-id $VPC \
    --cidr-block ${CIDR_BLOCK}
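The command above prints the full subnet description; when scripting, you can capture just the SubnetId with `--query`. A sketch (the `create_subnet` helper name is hypothetical):

```shell
# Hypothetical helper: create a subnet and print only its ID
create_subnet() {
  region=$1; vpc=$2; cidr=$3
  aws ec2 create-subnet \
      --region "$region" \
      --vpc-id "$vpc" \
      --cidr-block "$cidr" \
      --query 'Subnet.SubnetId' \
      --output text
}

# Example: SUBNET=$(create_subnet $REGION $VPC $CIDR_BLOCK)
```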

Create the AMI

Prepare the Import Prerequisites

Create the S3 Bucket
aws s3api create-bucket \
    --bucket $BUCKET \
    --create-bucket-configuration LocationConstraint=$REGION \
    --acl private
Create the vmimport Role

In order to create an AMI, ensure that the vmimport role exists as described in the official AWS documentation.

Note that the role should be associated with the S3 bucket we created above.
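As a sketch of what that setup involves, the trust policy below follows the AWS VM Import documentation (verify it against the current AWS docs before use; the `create_vmimport_role` helper name is hypothetical):

```shell
# Trust policy letting the VM Import service assume the role
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:Externalid": "vmimport" }
      }
    }
  ]
}
EOF

# Hypothetical helper: create the role from the policy above
create_vmimport_role() {
  aws iam create-role \
      --role-name vmimport \
      --assume-role-policy-document file://trust-policy.json
}
```

The role additionally needs a policy granting it access to the S3 bucket ($BUCKET); see the linked AWS documentation for the full policy document.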

Create the Image Snapshot

First, download the AWS image from a Talos release and extract it (substitute the URL of the AWS image archive from the release page):

curl -L <talos-release-aws-image-url> | tar -xv

Copy the RAW disk to S3 and import it as a snapshot:

aws s3 cp disk.raw s3://$BUCKET/talos-aws-tutorial.raw
aws ec2 import-snapshot \
    --region $REGION \
    --description "Talos kubernetes tutorial" \
    --disk-container "Format=raw,UserBucket={S3Bucket=$BUCKET,S3Key=talos-aws-tutorial.raw}"

Save the ImportTaskId from the output; once the import completes, the task's SnapshotTaskDetail will contain the SnapshotId we need for registering the image. To check on the status of the import, run:

aws ec2 describe-import-snapshot-tasks \
    --region $REGION

Once the SnapshotTaskDetail.Status indicates completed, we can register the image.
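The status check above can be wrapped in a simple polling loop. A sketch (the `import_status` and `wait_for_import` helpers are hypothetical; the argument is the ImportTaskId returned by import-snapshot):

```shell
# Hypothetical helper: print the status of one import task
import_status() {
  aws ec2 describe-import-snapshot-tasks \
      --region "$REGION" \
      --import-task-ids "$1" \
      --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' \
      --output text
}

# Poll until the import completes
wait_for_import() {
  while [ "$(import_status "$1")" != "completed" ]; do
    sleep 10
  done
}
```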

Register the Image
aws ec2 register-image \
    --region $REGION \
    --block-device-mappings "DeviceName=/dev/xvda,VirtualName=talos,Ebs={DeleteOnTermination=true,SnapshotId=$SNAPSHOT,VolumeSize=4,VolumeType=gp2}" \
    --root-device-name /dev/xvda \
    --virtualization-type hvm \
    --architecture x86_64 \
    --ena-support \
    --name talos-aws-tutorial-ami

We now have an AMI we can use to create our cluster. Save the AMI ID, as we will need it when we create EC2 instances.

Create a Security Group

aws ec2 create-security-group \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --description "Security Group for EC2 instances to allow ports required by Talos"

Using the security group ID from above, allow all internal traffic within the same security group:

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --protocol all \
    --port 0 \
    --source-group $SECURITY_GROUP

and expose the Talos and Kubernetes APIs (the rules below open them to 0.0.0.0/0; restrict the CIDR if your environment allows):

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --protocol tcp \
    --port 6443 \
    --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-name talos-aws-tutorial-sg \
    --protocol tcp \
    --port 50000-50001 \
    --cidr 0.0.0.0/0

Create a Load Balancer

aws elbv2 create-load-balancer \
    --region $REGION \
    --name talos-aws-tutorial-lb \
    --type network --subnets $SUBNET

Take note of the DNS name and ARN. We will need these soon.
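Both values can be captured in one call with `--query`. A sketch (the `create_nlb` helper name is hypothetical; assumes $REGION and $SUBNET are set):

```shell
# Hypothetical helper: create the NLB and print "<ARN> <DNS name>"
create_nlb() {
  aws elbv2 create-load-balancer \
      --region "$REGION" \
      --name talos-aws-tutorial-lb \
      --type network \
      --subnets "$SUBNET" \
      --query 'LoadBalancers[0].[LoadBalancerArn,DNSName]' \
      --output text
}

# Example: set -- $(create_nlb); LOAD_BALANCER_ARN=$1; LB_DNS=$2
```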

Create the Machine Configuration Files

Generating Base Configurations

Using the DNS name of the load balancer created earlier, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-aws-tutorial https://<load balancer IP or DNS>:<port>
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

Note that in this version of Talos, the generated configs are too long for the AWS user data field. As a workaround, comments can be stripped with a sed command like:

cat init.yaml | sed 's/ #.*$//' > temp.yaml; mv temp.yaml init.yaml

cat controlplane.yaml | sed 's/ #.*$//' > temp.yaml; mv temp.yaml controlplane.yaml
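EC2 caps user data at 16384 bytes, so it can help to verify that the stripped files fit. A sketch (the `check_userdata_size` helper name is hypothetical):

```shell
# Fail if a config file exceeds the 16384-byte EC2 user data limit
check_userdata_size() {
  size=$(wc -c < "$1")
  if [ "$size" -gt 16384 ]; then
    echo "$1 is $size bytes; too large for user data" >&2
    return 1
  fi
  echo "$1 fits ($size bytes)"
}

# Example: check_userdata_size init.yaml && check_userdata_size controlplane.yaml
```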

At this point, you can modify the generated configs to your liking.

Validate the Configuration Files

$ talosctl validate --config init.yaml --mode cloud
init.yaml is valid for cloud mode
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config join.yaml --mode cloud
join.yaml is valid for cloud mode

Create the EC2 Instances

Note: There is a known issue that prevents Talos from running on T2 instance types. Please use T3 if you need burstable instance types.

Create the Bootstrap Node

aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 1 \
    --instance-type t3.small \
    --user-data file://init.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-0}]"

Create the Remaining Control Plane Nodes

CP_COUNT=1
while [[ "$CP_COUNT" -lt 3 ]]; do
  aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 1 \
    --instance-type t3.small \
    --user-data file://controlplane.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-$CP_COUNT}]"
  ((CP_COUNT++))
done

Make a note of the resulting PrivateIpAddress of the init and control plane nodes; we will need them when configuring the load balancer.
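If you did not record the addresses, they can be recovered from the Name tags. A sketch (the `get_private_ip` helper name is hypothetical; assumes $REGION is set):

```shell
# Hypothetical helper: look up a running instance's private IP by its Name tag
get_private_ip() {
  aws ec2 describe-instances \
      --region "$REGION" \
      --filters "Name=tag:Name,Values=$1" "Name=instance-state-name,Values=running" \
      --query 'Reservations[].Instances[].PrivateIpAddress' \
      --output text
}

# Example: CP_NODE_1_IP=$(get_private_ip talos-aws-tutorial-cp-0)
```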

Create the Worker Nodes

aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 3 \
    --instance-type t3.small \
    --user-data file://join.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-worker}]"

Configure the Load Balancer

aws elbv2 create-target-group \
    --region $REGION \
    --name talos-aws-tutorial-tg \
    --protocol TCP \
    --port 6443 \
    --target-type ip \
    --vpc-id $VPC

Now, using the target group's ARN and the PrivateIpAddress of each control plane instance, register them as targets:

aws elbv2 register-targets \
    --region $REGION \
    --target-group-arn $TARGET_GROUP_ARN \
    --targets Id=$CP_NODE_1_IP  Id=$CP_NODE_2_IP  Id=$CP_NODE_3_IP

Using the ARNs of the load balancer and target group from previous steps, create the listener:

aws elbv2 create-listener \
    --region $REGION \
    --load-balancer-arn $LOAD_BALANCER_ARN \
    --protocol TCP \
    --port 443 \
    --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN

Retrieve the kubeconfig

At this point we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .
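talosctl writes a file named kubeconfig into the given directory; you can then point kubectl at it to confirm the nodes have registered. A sketch (the `nodes_ready` helper name is hypothetical; node names and readiness will vary in your cluster):

```shell
# Hypothetical helper: count the nodes currently registered with the API server
nodes_ready() {
  kubectl --kubeconfig ./kubeconfig get nodes --no-headers | wc -l
}

# Example: nodes_ready   # expect 6 once all 3 control plane and 3 worker nodes join
```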