AWS

Official AMI Images

List of AMI images for each AWS region:
| Region         | Version | Instance Type | Architecture | AMI                   |
|----------------|---------|---------------|--------------|-----------------------|
| ap-northeast-1 | v0.10.0 | hvm           | amd64        | ami-06d6df9dcf5f0d669 |
| ap-northeast-1 | v0.10.0 | hvm           | arm64        | ami-00132a027f49d1cc1 |
| ap-northeast-1 | v0.10.1 | hvm           | amd64        | ami-013850dcbd2594db9 |
| ap-northeast-1 | v0.10.1 | hvm           | arm64        | ami-06e08b057f9be0569 |
| ap-northeast-2 | v0.10.0 | hvm           | amd64        | ami-097a1bec9cd6f6567 |
| ap-northeast-2 | v0.10.0 | hvm           | arm64        | ami-090647ef24433c2e0 |
| ap-northeast-2 | v0.10.1 | hvm           | amd64        | ami-03a4d297a85e29e1e |
| ap-northeast-2 | v0.10.1 | hvm           | arm64        | ami-059bff26af6a47318 |
| ap-northeast-3 | v0.10.0 | hvm           | amd64        | ami-0df81ce014270cd4c |
| ap-northeast-3 | v0.10.0 | hvm           | arm64        | ami-01036f1476ce62eb3 |
| ap-northeast-3 | v0.10.1 | hvm           | amd64        | ami-0a805fac67739fbe5 |
| ap-northeast-3 | v0.10.1 | hvm           | arm64        | ami-0157aa0e45fdaf6aa |
| ap-south-1     | v0.10.0 | hvm           | amd64        | ami-0fd72ee371a13bbf7 |
| ap-south-1     | v0.10.0 | hvm           | arm64        | ami-0701e4ed38401c2bf |
| ap-south-1     | v0.10.1 | hvm           | amd64        | ami-098d70c36bb035488 |
| ap-south-1     | v0.10.1 | hvm           | arm64        | ami-045f0981b8197c451 |
| ap-southeast-1 | v0.10.0 | hvm           | amd64        | ami-0a6cd039c3941bdb8 |
| ap-southeast-1 | v0.10.0 | hvm           | arm64        | ami-03ebdaddb3c9d1460 |
| ap-southeast-1 | v0.10.1 | hvm           | amd64        | ami-05fc1296838a9781e |
| ap-southeast-1 | v0.10.1 | hvm           | arm64        | ami-0ad5793e9044b3b91 |
| ap-southeast-2 | v0.10.0 | hvm           | amd64        | ami-0371f9004f42a03af |
| ap-southeast-2 | v0.10.0 | hvm           | arm64        | ami-06ca96eb7bf1dd91d |
| ap-southeast-2 | v0.10.1 | hvm           | amd64        | ami-0a6d4a0d49a0861ba |
| ap-southeast-2 | v0.10.1 | hvm           | arm64        | ami-071f9f100f9c90708 |
| ca-central-1   | v0.10.0 | hvm           | amd64        | ami-0b35d60fd213b1517 |
| ca-central-1   | v0.10.0 | hvm           | arm64        | ami-088bcd36fd11135bf |
| ca-central-1   | v0.10.1 | hvm           | amd64        | ami-02963cf1232776989 |
| ca-central-1   | v0.10.1 | hvm           | arm64        | ami-032da86aedb71d404 |
| eu-central-1   | v0.10.0 | hvm           | amd64        | ami-066a384c5304fa8e6 |
| eu-central-1   | v0.10.0 | hvm           | arm64        | ami-09f443bd6c7f47931 |
| eu-central-1   | v0.10.1 | hvm           | amd64        | ami-0bd578b4cb42e1c6d |
| eu-central-1   | v0.10.1 | hvm           | arm64        | ami-02911fca9c76aa785 |
| eu-north-1     | v0.10.0 | hvm           | amd64        | ami-0f75e6aca65327989 |
| eu-north-1     | v0.10.0 | hvm           | arm64        | ami-08d48f977dc627d39 |
| eu-north-1     | v0.10.1 | hvm           | amd64        | ami-0b33a516401e5180a |
| eu-north-1     | v0.10.1 | hvm           | arm64        | ami-0a28db23f8285de23 |
| eu-south-1     | v0.10.0 | hvm           | amd64        | ami-0d05d3f410c6de0f9 |
| eu-south-1     | v0.10.0 | hvm           | arm64        | ami-0c22b7eeaf03e7fc0 |
| eu-south-1     | v0.10.1 | hvm           | amd64        | ami-095510271e25ff150 |
| eu-south-1     | v0.10.1 | hvm           | arm64        | ami-010ffeb0111f52409 |
| eu-west-1      | v0.10.0 | hvm           | amd64        | ami-007a78d59b1d8d25b |
| eu-west-1      | v0.10.0 | hvm           | arm64        | ami-0d07111b58fe65a89 |
| eu-west-1      | v0.10.1 | hvm           | amd64        | ami-0055b9fe4472a6e0c |
| eu-west-1      | v0.10.1 | hvm           | arm64        | ami-00b9381caae786e35 |
| eu-west-2      | v0.10.0 | hvm           | amd64        | ami-066b1f0e9c87d63f1 |
| eu-west-2      | v0.10.0 | hvm           | arm64        | ami-0191b424273ce86cc |
| eu-west-2      | v0.10.1 | hvm           | amd64        | ami-060b08df37f828012 |
| eu-west-2      | v0.10.1 | hvm           | arm64        | ami-0d41d2d284581a501 |
| eu-west-3      | v0.10.0 | hvm           | amd64        | ami-077c6863dc04e93fb |
| eu-west-3      | v0.10.0 | hvm           | arm64        | ami-0eb0357929d13c528 |
| eu-west-3      | v0.10.1 | hvm           | amd64        | ami-0f437d5c9024041f5 |
| eu-west-3      | v0.10.1 | hvm           | arm64        | ami-0fc4fbab6de7c0153 |
| sa-east-1      | v0.10.0 | hvm           | amd64        | ami-01c9b31517f2abcd0 |
| sa-east-1      | v0.10.0 | hvm           | arm64        | ami-05967c7159c449886 |
| sa-east-1      | v0.10.1 | hvm           | amd64        | ami-0f9d5602c604f91e6 |
| sa-east-1      | v0.10.1 | hvm           | arm64        | ami-06c94f77b8217615a |
| us-east-1      | v0.10.0 | hvm           | amd64        | ami-0131e4eccb3608841 |
| us-east-1      | v0.10.0 | hvm           | arm64        | ami-00ebac115865ffdd1 |
| us-east-1      | v0.10.1 | hvm           | amd64        | ami-0e6048476edccc728 |
| us-east-1      | v0.10.1 | hvm           | arm64        | ami-0383a36fe460c83e0 |
| us-east-2      | v0.10.0 | hvm           | amd64        | ami-076d5f97eb212a329 |
| us-east-2      | v0.10.0 | hvm           | arm64        | ami-06e6948ca622c919d |
| us-east-2      | v0.10.1 | hvm           | amd64        | ami-0eac99fd6f1c96724 |
| us-east-2      | v0.10.1 | hvm           | arm64        | ami-057df12b5a196fc80 |
| us-west-1      | v0.10.0 | hvm           | amd64        | ami-00a242e8fb50c5a18 |
| us-west-1      | v0.10.0 | hvm           | arm64        | ami-0518f84ccd7795cf6 |
| us-west-1      | v0.10.1 | hvm           | amd64        | ami-046618bd7d7b8c452 |
| us-west-1      | v0.10.1 | hvm           | arm64        | ami-0cfc41b75ba707220 |
| us-west-2      | v0.10.0 | hvm           | amd64        | ami-0388f1315de92dec0 |
| us-west-2      | v0.10.0 | hvm           | arm64        | ami-05639d0e7822a103c |
| us-west-2      | v0.10.1 | hvm           | amd64        | ami-035e1ca9a4650f5ac |
| us-west-2      | v0.10.1 | hvm           | arm64        | ami-0efad16c7f62f1453 |
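
To sanity-check an AMI ID from the table in your target region, you can describe it; a sketch using the us-west-2 v0.10.1 amd64 image:

aws ec2 describe-images \
    --region us-west-2 \
    --image-ids ami-035e1ca9a4650f5ac \
    --query 'Images[0].{Name:Name,Architecture:Architecture,State:State}'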

Creating a Cluster via the AWS CLI

In this guide we will create an HA Kubernetes cluster with three control plane nodes and three worker nodes. We assume an existing VPC, and some familiarity with AWS. If you need more information on AWS specifics, please see the official AWS documentation.
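
The commands below rely on a handful of shell variables. A sketch of setting the fixed ones up front (every value here is a placeholder for your own environment; $SUBNET, $SECURITY_GROUP, $SNAPSHOT, and $AMI are captured from command output as we go):

export REGION=us-west-2                  # target AWS region
export VPC=vpc-0123456789abcdef0         # ID of your existing VPC (placeholder)
export CIDR_BLOCK=10.0.4.0/24            # CIDR for the new subnet (placeholder)
export BUCKET=talos-aws-tutorial-bucket  # S3 bucket for the image import (placeholder)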

Create the Subnet

aws ec2 create-subnet \
    --region $REGION \
    --vpc-id $VPC \
    --cidr-block ${CIDR_BLOCK}
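
If you want to capture the new subnet's ID directly into the $SUBNET variable used later, the same call can be wrapped with a --query (a sketch):

SUBNET=$(aws ec2 create-subnet \
    --region $REGION \
    --vpc-id $VPC \
    --cidr-block ${CIDR_BLOCK} \
    --query 'Subnet.SubnetId' \
    --output text)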

Create the AMI

Prepare the Import Prerequisites

Create the S3 Bucket

aws s3api create-bucket \
    --bucket $BUCKET \
    --create-bucket-configuration LocationConstraint=$REGION \
    --acl private

Create the vmimport Role

In order to create an AMI, ensure that the vmimport role exists as described in the official AWS documentation.

Note that the role should be associated with the S3 bucket we created above.
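
For reference, a condensed sketch of creating the role; the trust policy below follows the AWS VM Import/Export documentation, and the bucket-scoped access policy is omitted here, so treat the official docs as authoritative:

cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "vmie.amazonaws.com" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:Externalid": "vmimport" }
      }
    }
  ]
}
EOF

# The role name must be exactly "vmimport".
aws iam create-role \
    --role-name vmimport \
    --assume-role-policy-document file://trust-policy.json

The role also needs an attached policy granting read access to s3://$BUCKET plus the snapshot import permissions; the exact policy document is listed in the AWS documentation.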

Create the Image Snapshot

First, download the AWS image from a Talos release:

curl -LO https://github.com/talos-systems/talos/releases/latest/download/aws-amd64.tar.gz
tar -xvf aws-amd64.tar.gz

Copy the RAW disk to S3 and import it as a snapshot:

aws s3 cp disk.raw s3://$BUCKET/talos-aws-tutorial.raw
aws ec2 import-snapshot \
    --region $REGION \
    --description "Talos kubernetes tutorial" \
    --disk-container "Format=raw,UserBucket={S3Bucket=$BUCKET,S3Key=talos-aws-tutorial.raw}"

Save the ImportTaskId from the output; we will use it to track the import, and we will need the resulting SnapshotId once the import is done. To check on the status of the import, run:

aws ec2 describe-import-snapshot-tasks \
    --region $REGION \
    --import-task-ids $IMPORT_TASK_ID

Once the SnapshotTaskDetail.Status indicates completed, we can register the image.
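
If you would rather wait in a loop than re-run the command by hand, a sketch (assuming the task ID from the import-snapshot output was saved into $IMPORT_TASK_ID):

# Poll until the snapshot import completes.
while [ "$(aws ec2 describe-import-snapshot-tasks \
    --region $REGION \
    --import-task-ids $IMPORT_TASK_ID \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.Status' \
    --output text)" != "completed" ]; do
  echo "waiting for snapshot import..."
  sleep 15
done

# Capture the snapshot ID for the register-image step below.
SNAPSHOT=$(aws ec2 describe-import-snapshot-tasks \
    --region $REGION \
    --import-task-ids $IMPORT_TASK_ID \
    --query 'ImportSnapshotTasks[0].SnapshotTaskDetail.SnapshotId' \
    --output text)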

Register the Image

aws ec2 register-image \
    --region $REGION \
    --block-device-mappings "DeviceName=/dev/xvda,VirtualName=talos,Ebs={DeleteOnTermination=true,SnapshotId=$SNAPSHOT,VolumeSize=4,VolumeType=gp2}" \
    --root-device-name /dev/xvda \
    --virtualization-type hvm \
    --architecture x86_64 \
    --ena-support \
    --name talos-aws-tutorial-ami

We now have an AMI we can use to create our cluster. Save the AMI ID, as we will need it when we create EC2 instances.
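
One way to save it into the $AMI variable used below, looking the image up by the name we just registered (a sketch):

AMI=$(aws ec2 describe-images \
    --region $REGION \
    --owners self \
    --filters "Name=name,Values=talos-aws-tutorial-ami" \
    --query 'Images[0].ImageId' \
    --output text)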

Create a Security Group

aws ec2 create-security-group \
    --region $REGION \
    --vpc-id $VPC \
    --group-name talos-aws-tutorial-sg \
    --description "Security Group for EC2 instances to allow ports required by Talos"

Using the security group ID from above, allow all internal traffic within the same security group:

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-id $SECURITY_GROUP \
    --protocol all \
    --port 0 \
    --source-group $SECURITY_GROUP

and expose the Talos and Kubernetes APIs:

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-id $SECURITY_GROUP \
    --protocol tcp \
    --port 6443 \
    --cidr 0.0.0.0/0

aws ec2 authorize-security-group-ingress \
    --region $REGION \
    --group-id $SECURITY_GROUP \
    --protocol tcp \
    --port 50000-50001 \
    --cidr 0.0.0.0/0

Create a Load Balancer

aws elbv2 create-load-balancer \
    --region $REGION \
    --name talos-aws-tutorial-lb \
    --type network --subnets $SUBNET

Take note of the DNS name and ARN. We will need these soon.
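
Both can be recovered later with describe-load-balancers (a sketch):

LOAD_BALANCER_ARN=$(aws elbv2 describe-load-balancers \
    --region $REGION \
    --names talos-aws-tutorial-lb \
    --query 'LoadBalancers[0].LoadBalancerArn' \
    --output text)

LB_DNS=$(aws elbv2 describe-load-balancers \
    --region $REGION \
    --names talos-aws-tutorial-lb \
    --query 'LoadBalancers[0].DNSName' \
    --output text)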

Create the Machine Configuration Files

Generating Base Configurations

Using the DNS name of the load balancer created earlier, generate the base configuration files for the Talos machines:

$ talosctl gen config talos-k8s-aws-tutorial https://<load balancer IP or DNS>:<port> --with-examples=false --with-docs=false
created init.yaml
created controlplane.yaml
created join.yaml
created talosconfig

Note that without the --with-examples=false and --with-docs=false flags, the generated configs include inline documentation and example snippets, which makes them too long for the 16 KiB EC2 user data limit.

At this point, you can modify the generated configs to your liking.

Optionally, you can specify --config-patch with an RFC 6902 JSON patch, which will be applied during config generation, as shown below.
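
For example, a hypothetical patch that adds an extra certificate SAN to the machine config during generation (the field shown is illustrative; consult the Talos configuration reference for the fields valid in your version):

talosctl gen config talos-k8s-aws-tutorial https://<load balancer DNS>:443 \
    --with-examples=false --with-docs=false \
    --config-patch '[{"op": "add", "path": "/machine/certSANs", "value": ["<load balancer DNS>"]}]'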

Validate the Configuration Files

$ talosctl validate --config init.yaml --mode cloud
init.yaml is valid for cloud mode
$ talosctl validate --config controlplane.yaml --mode cloud
controlplane.yaml is valid for cloud mode
$ talosctl validate --config join.yaml --mode cloud
join.yaml is valid for cloud mode

Create the EC2 Instances

Note: There is a known issue that prevents Talos from running on T2 instance types. Please use T3 if you need burstable instance types.

Create the Bootstrap Node

aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 1 \
    --instance-type t3.small \
    --user-data file://init.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-0}]"

Create the Remaining Control Plane Nodes

CP_COUNT=1
while [[ "$CP_COUNT" -lt 3 ]]; do
  aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 1 \
    --instance-type t3.small \
    --user-data file://controlplane.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --associate-public-ip-address \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-cp-$CP_COUNT}]"
  ((CP_COUNT++))
done

Make a note of the resulting PrivateIpAddress from the init and controlplane nodes for later use.
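
If you did not record them from the run-instances output, the private IPs can be recovered by tag (a sketch for the first control plane node; repeat for -cp-1 and -cp-2):

CP_NODE_1_IP=$(aws ec2 describe-instances \
    --region $REGION \
    --filters "Name=tag:Name,Values=talos-aws-tutorial-cp-0" \
              "Name=instance-state-name,Values=pending,running" \
    --query 'Reservations[0].Instances[0].PrivateIpAddress' \
    --output text)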

Create the Worker Nodes

aws ec2 run-instances \
    --region $REGION \
    --image-id $AMI \
    --count 3 \
    --instance-type t3.small \
    --user-data file://join.yaml \
    --subnet-id $SUBNET \
    --security-group-ids $SECURITY_GROUP \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=talos-aws-tutorial-worker}]"

Configure the Load Balancer

aws elbv2 create-target-group \
    --region $REGION \
    --name talos-aws-tutorial-tg \
    --protocol TCP \
    --port 6443 \
    --target-type ip \
    --vpc-id $VPC

Now, using the target group's ARN and the PrivateIpAddress of each control plane instance, register the targets:

aws elbv2 register-targets \
    --region $REGION \
    --target-group-arn $TARGET_GROUP_ARN \
    --targets Id=$CP_NODE_1_IP Id=$CP_NODE_2_IP Id=$CP_NODE_3_IP

Using the ARNs of the load balancer and target group from previous steps, create the listener:

aws elbv2 create-listener \
    --region $REGION \
    --load-balancer-arn $LOAD_BALANCER_ARN \
    --protocol TCP \
    --port 443 \
    --default-actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN

Retrieve the kubeconfig

At this point we can retrieve the admin kubeconfig by running:

talosctl --talosconfig talosconfig config endpoint <control plane 1 IP>
talosctl --talosconfig talosconfig config node <control plane 1 IP>
talosctl --talosconfig talosconfig kubeconfig .
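
talosctl kubeconfig writes a file named kubeconfig into the given directory (here, the current one). Once the nodes have finished joining, a quick sanity check:

kubectl --kubeconfig ./kubeconfig get nodes -o wide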