# Migrating from Kubeadm

It is possible to migrate a cluster that was created with kubeadm to Talos.

The high-level steps are the following:
- Collect CA certificates and a bootstrap token from a control plane node.
- Create a Talos machine config using the CA certificates and the bootstrap token you collected.
- Update control plane endpoint in the machine config to point to the existing control plane (i.e. your load balancer address).
- Boot a new Talos machine and apply the machine config.
- Verify that the new control plane node is ready.
- Remove one of the old control plane nodes.
- Repeat the same steps for all control plane nodes.
- Verify that all control plane nodes are ready.
- Repeat the same steps for all worker nodes, using the machine config generated for the workers.
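One of the steps above updates the control plane endpoint in the machine config; in the Talos machine config this lives under `.cluster.controlPlane.endpoint`. A minimal sketch of the relevant fragment (the address and port below are placeholders, not values from your cluster) might look like:

```yaml
cluster:
  controlPlane:
    # Point this at the existing kube-apiserver load balancer,
    # not at any individual control plane node (address is an example).
    endpoint: https://<EXISTING_CLUSTER_LB_IP>:6443
```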
## Remarks on kube-apiserver load balancer
While migrating to Talos, you need to make sure that your kube-apiserver load balancer is in place and keeps pointing to the correct set of control plane nodes.
This process depends on your load balancer setup.
If you are using an LB that is external to the control plane nodes (e.g. cloud provider LB, F5 BIG-IP, etc.), you need to make sure that you update the backend IPs of the load balancer to point to the control plane nodes as you add Talos nodes and remove kubeadm-based ones.
If your load balancing is done on the control plane nodes (e.g. keepalived + haproxy on the control plane nodes), you can do the following:
- Add Talos nodes and remove kubeadm-based ones, updating the haproxy backends to point to the newly added nodes as you go, until only the last kubeadm-based control plane node remains.
- Turn off keepalived to drop the virtual IP used by the kubeadm-based nodes (introduces kube-apiserver downtime).
- Set up a new virtual-IP-based load balancer on the new set of Talos control plane nodes, using the previous LB IP as the virtual IP.
- Verify apiserver connectivity over the Talos-managed virtual IP.
- Migrate the last control-plane node.
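For illustration, the haproxy backend section on such a setup would be updated node by node. In this hypothetical fragment, one kubeadm-based backend has already been swapped for a Talos node (all server names and addresses are made up):

```
# /etc/haproxy/haproxy.cfg (hypothetical fragment)
backend kube-apiserver
    mode tcp
    balance roundrobin
    # server old-cp1 10.0.0.11:6443 check   # removed kubeadm-based node
    server talos-cp1 10.0.0.21:6443 check   # newly added Talos node
    server old-cp2 10.0.0.12:6443 check     # still to be migrated
```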
## Prerequisites

- Admin access to the kubeadm-based cluster
- Access to the `/etc/kubernetes/pki` directory (e.g. SSH & root permissions) on the control plane nodes of the kubeadm-based cluster
- Access to the kube-apiserver load-balancer configuration

## Step-by-step guide

Download the `/etc/kubernetes/pki` directory from a control plane node of the kubeadm-based cluster.
Create a new join token for the new control plane nodes:

```bash
# inside a control plane node
kubeadm token create
```
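The generated token is what you will pass to `talosctl gen secrets` below. kubeadm bootstrap tokens follow the format `<6 chars>.<16 chars>` (lowercase letters and digits); a quick sanity check, using a made-up example token rather than a real one, could be:

```shell
# Hypothetical example token; substitute the output of `kubeadm token create`.
TOKEN="abcdef.0123456789abcdef"

# Validate the expected bootstrap-token format: 6 ID chars, a dot, 16 secret chars.
if echo "$TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
  echo "token format OK"
else
  echo "unexpected token format" >&2
  exit 1
fi
```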
Create Talos secrets from the PKI directory you downloaded in step 1 and the token you generated in step 2:

```bash
talosctl gen secrets --kubernetes-bootstrap-token <TOKEN> --from-kubernetes-pki <PKI_DIR>
```
Create a new Talos config from the secrets:

```bash
talosctl gen config --with-secrets secrets.yaml <CLUSTER_NAME> https://<EXISTING_CLUSTER_LB_IP>
```
Collect the information about the kubeadm-based cluster from the kubeadm configmap:

```bash
kubectl get configmap -n kube-system kubeadm-config -oyaml
```
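The output contains a `ClusterConfiguration` document; the relevant part looks roughly like the following (the subnet and domain values shown here are common kubeadm defaults, not necessarily your cluster's values):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: kubeadm-config
  namespace: kube-system
data:
  ClusterConfiguration: |
    networking:
      dnsDomain: cluster.local        # example value
      podSubnet: 10.244.0.0/16        # example value
      serviceSubnet: 10.96.0.0/12     # example value
```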
Take note of the following information in the `kubeadm-config` configmap:

- `networking.podSubnet`
- `networking.serviceSubnet`
- `networking.dnsDomain`

Replace the following information in the generated `controlplane.yaml`:

- `.cluster.network.podSubnets` with the value of `networking.podSubnet` from the previous step
- `.cluster.network.serviceSubnets` with the value of `networking.serviceSubnet` from the previous step
- `.cluster.network.dnsDomain` with the value of `networking.dnsDomain` from the previous step
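If you save the configmap output to a file, the three values can be pulled out with standard tools. A minimal sketch, using a fabricated sample file in place of output from a live cluster:

```shell
# Stand-in for `kubectl get configmap -n kube-system kubeadm-config -oyaml > kubeadm-config.yaml`;
# the values below are examples only.
cat > kubeadm-config.yaml <<'EOF'
data:
  ClusterConfiguration: |
    networking:
      dnsDomain: cluster.local
      podSubnet: 10.244.0.0/16
      serviceSubnet: 10.96.0.0/12
EOF

# Extract each networking field by key; awk prints the value after the colon.
POD_SUBNET=$(awk '/podSubnet:/ {print $2}' kubeadm-config.yaml)
SERVICE_SUBNET=$(awk '/serviceSubnet:/ {print $2}' kubeadm-config.yaml)
DNS_DOMAIN=$(awk '/dnsDomain:/ {print $2}' kubeadm-config.yaml)

echo "podSubnet=$POD_SUBNET serviceSubnet=$SERVICE_SUBNET dnsDomain=$DNS_DOMAIN"
```

These are the values to carry over into `.cluster.network.podSubnets`, `.cluster.network.serviceSubnets`, and `.cluster.network.dnsDomain` in `controlplane.yaml`.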
Go through the rest of `controlplane.yaml` and `worker.yaml` to customize them according to your needs.
Bring up a Talos node to be the initial Talos control plane node.
Apply the generated `controlplane.yaml` to the Talos control plane node:

```bash
talosctl --nodes <TALOS_NODE_IP> apply-config --insecure --file controlplane.yaml
```
Wait until the new control plane node joins the cluster and is ready:

```bash
kubectl get node -owide --watch
```
Update your load balancer to point to the new control plane node.
Drain the old control plane node you are replacing:

```bash
kubectl drain <OLD_NODE> --delete-emptydir-data --force --ignore-daemonsets --timeout=10m
```
Remove the old control plane node from the cluster:

```bash
kubectl delete node <OLD_NODE>
```
Destroy the old node:

```bash
# inside the node
sudo kubeadm reset --force
```
Repeat the same steps, starting from bringing up a new Talos control plane node, for the remaining control plane nodes.
Repeat the same steps, starting from bringing up a new Talos node, for all worker nodes, applying the generated `worker.yaml` instead and skipping the load-balancer step:

```bash
talosctl --nodes <TALOS_NODE_IP> apply-config --insecure --file worker.yaml
```