The final module of the Cluster Architecture, Installation, and Configuration domain is Implement etcd backup and restore. Let's quickly walk through the actions we need to complete this step for the exam.
Perform a Backup of etcd
While it’s still early and details of the CKA v1.19 environment aren’t known yet, I’m anticipating a small change to how etcd backup and restore is performed. If you’ve been preparing for the CKA before the September 2020 change to Kubernetes v1.19, you may be familiar with the environment variable
export ETCDCTL_API=3
which ensures you’re using version 3 of etcd’s API, the one with the backup and restore capability. However, Kubernetes v1.19 ships with etcd 3.4.9, and in etcd 3.4.x the default API version is 3, so this step is no longer necessary! If
etcdctl version
returns a version lower than 3.4.x, you will still need to set the API version to 3 before performing backup and restore operations.
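If you want to make that check automatic rather than eyeballing the output, the decision can be scripted. The `needs_api_export` function below is a hypothetical helper (not part of etcdctl); it compares a major.minor version string against 3.4. In the exam environment you would feed it the version reported by `etcdctl version`:

```shell
# Hypothetical helper: returns success (0) when the given etcdctl
# version still defaults to API v2, i.e. anything older than 3.4.
needs_api_export() {
  major="${1%%.*}"        # text before the first dot
  rest="${1#*.}"          # text after the first dot
  minor="${rest%%.*}"     # text between the first and second dots
  [ "$major" -lt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -lt 4 ]; }
}

# Example: etcd 3.3 still needs the export, 3.4.9 does not.
if needs_api_export "3.3.13"; then
  export ETCDCTL_API=3
fi
```

This is just a sketch of the version comparison; on the exam, simply running `etcdctl version` and reading the first line is quicker.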
Get The Info You Need First
When you type the etcd backup command, you’re going to need to specify the location of a few certificates and a key. Let’s grab that really quick!
kubectl describe pod etcd-master -n kube-system
The output that we’re interested in is under the Command section. You will need to copy the locations of the certificate files and the key:
Command:
  etcd
  --advertise-client-urls=https://172.17.0.54:2379
  --cert-file=/etc/kubernetes/pki/etcd/server.crt
  --client-cert-auth=true
  --data-dir=/var/lib/etcd
  --initial-advertise-peer-urls=https://172.17.0.54:2380
  --initial-cluster=master=https://172.17.0.54:2380
  --key-file=/etc/kubernetes/pki/etcd/server.key
  --listen-client-urls=https://127.0.0.1:2379,https://172.17.0.54:2379
  --listen-metrics-urls=http://127.0.0.1:2381
  --listen-peer-urls=https://172.17.0.54:2380
  --name=master
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt
  --peer-client-cert-auth=true
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
  --snapshot-count=10000
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt
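If you'd rather not scan the full describe output by eye, a grep can pull out just the flags we care about (this assumes the pod is named etcd-master, as in the example above):

```shell
# Filter the etcd pod spec down to the cert, key, and client URL flags.
kubectl describe pod etcd-master -n kube-system \
  | grep -E 'cert-file|key-file|trusted-ca-file|listen-client-urls'
```

Note this will also match the `--peer-*` variants, which you don't need for the backup; the ones without the `peer-` prefix are the ones to copy.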
Now here’s the fun part: the names of the options that etcd uses aren’t the same as the ones etcdctl uses for the backup, but they’re close enough to match up. Here’s how they map:
|etcd options|etcdctl options|
|---|---|
|--trusted-ca-file|--cacert|
|--cert-file|--cert|
|--key-file|--key|
|--listen-client-urls|--endpoints|
Your backup command should look like this:
etcdctl snapshot save etcd.db \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --endpoints=https://127.0.0.1:2379 \
  --key=/etc/kubernetes/pki/etcd/server.key

Snapshot saved at etcd.db
That’s it! The etcd database is backed up and we’re ready to restore!
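Before moving on, it doesn't hurt to confirm the snapshot is valid. etcdctl 3.4 includes a `snapshot status` subcommand that reads the snapshot file's metadata:

```shell
# Sanity check: print the snapshot's hash, revision, key count, and size.
etcdctl snapshot status etcd.db --write-out=table
```

If this prints a table with a non-zero key count, your backup is good to go.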
Perform a Restore of etcd
etcdctl snapshot restore etcd.db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --name=master \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  --data-dir=/var/lib/etcd-from-backup \
  --initial-cluster=master=https://127.0.0.1:2380 \
  --initial-cluster-token=etcd-cluster-1 \
  --initial-advertise-peer-urls=https://127.0.0.1:2380
What’s going on here? When restoring etcd from a backup, we’re effectively setting up a new etcd cluster. So we tell etcd:
- This is the endpoint IP:Port that I want etcd to use for client communications
- Where to find the cluster’s certificates and keys
- Restore the etcd cluster snapshot to the /var/lib/etcd-from-backup directory
- The IP:Port for server-to-server communication
- Re-initialize the etcd cluster token since we are creating a new cluster
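One follow-up worth knowing: the restore wrote the data to /var/lib/etcd-from-backup, but the running etcd pod is still pointed at /var/lib/etcd. On a kubeadm cluster (which is what the flags above suggest), etcd runs as a static pod, so the last step is to update its manifest; the kubelet restarts the pod automatically when the file changes. A sketch of the edits, assuming the standard kubeadm layout:

```shell
# Sketch, assuming a kubeadm cluster: point the etcd static pod at the
# restored data directory. Two things need to change in the manifest:
#   1. the --data-dir flag in the container's command
#   2. the hostPath of the etcd-data volume
vi /etc/kubernetes/manifests/etcd.yaml
#   --data-dir=/var/lib/etcd-from-backup
#   hostPath:
#     path: /var/lib/etcd-from-backup
```

Once the etcd pod comes back up, the cluster state should reflect the snapshot you restored.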