My Kubernetes certificates expired just as I was publishing the last blog post. I was able to resolve it by following these steps on Fedora with kubeadm:
- Confirm they are expired by running:
; kubeadm certs check-expiration
- Update the certificates manually by shelling into a control plane node and running:
; kubeadm certs renew all
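After renewing, re-run the expiration check to confirm the new dates; note that, per the kubeadm documentation, the control plane components only pick up renewed certificates once they are restarted, which the upgrade below takes care of:
; kubeadm certs check-expiration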
- Now, upgrade to the next version of kubeadm you can update to. Find your current version with:
; kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"26", GitVersion:"v1.26.9", GitCommit:"d1483fdf7a0578c83523bc1e2212a606a44fd71d", GitTreeState:"clean", BuildDate:"2023-09-13T11:31:28Z", GoVersion:"go1.20.8", Compiler:"gc", Platform:"linux/amd64"}
Then find the latest patch version of the next minor release:
; yum list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes
If you see something like:
Errors during downloading metadata for repository 'kubernetes':
  - Status code: 404 for https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64/repodata/repomd.xml (IP: 2607:f8b0:4024:c02::8b)
Error: Failed to download metadata for repo 'kubernetes': Cannot download repomd.xml: Cannot download repodata/repomd.xml: All mirrors were tried
Ignoring repositories: kubernetes
then your repo file still points at the legacy packages.cloud.google.com repositories, which were deprecated and shut down in favor of pkgs.k8s.io. Update /etc/yum.repos.d/kubernetes.repo (see k8s package management), replacing the version with the next minor release:
# This overwrites any existing configuration in /etc/yum.repos.d/kubernetes.repo
cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.27/rpm/repodata/repomd.xml.key
EOF
From then on, you must update this file on each machine for each minor version upgrade.
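As a quick sanity check (not part of the original steps), you can confirm dnf can actually reach the new repo before continuing:
; sudo dnf makecache --disablerepo='*' --enablerepo=kubernetes --disableexcludes=kubernetes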
- Install that version:
; sudo yum install -y kubeadm-'1.28.15-*' --disableexcludes=kubernetes
- Plan an upgrade:
; sudo kubeadm upgrade plan
- Upgrade kubelet and kubectl:
; sudo yum install -y kubelet-'1.28.15-*' kubectl-'1.28.15-*' --disableexcludes=kubernetes
- On each worker node, after updating /etc/yum.repos.d/kubernetes.repo as above, install the same version of kubeadm:
; sudo yum install -y kubeadm-'1.28.15-*' --disableexcludes=kubernetes
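If the node runs workloads you want rescheduled cleanly, you can optionally drain it before upgrading, as the upstream upgrade guide suggests (replace <node-name> with the node's actual name, and uncordon it again once the upgrade finishes):
; kubectl drain <node-name> --ignore-daemonsets
; kubectl uncordon <node-name>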
- Upgrade the node:
; sudo kubeadm upgrade node
- Upgrade kubelet and kubectl:
; sudo yum install -y kubelet-'1.28.15-*' kubectl-'1.28.15-*' --disableexcludes=kubernetes
- On a control plane node, apply the version you installed earlier:
; sudo kubeadm upgrade apply v1.28.15
- On all nodes, restart kubelet:
; sudo systemctl restart kubelet.service
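At this point, every node should report the new version (assuming your kubeconfig still works; if not, the next two steps refresh it):
; kubectl get nodes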
- On a control plane node, copy the admin.conf to your user's config (the certificate renewal at the start regenerated its embedded credentials, so any older copy is stale):
; sudo cp /etc/kubernetes/admin.conf ~/.kube/config
- Copy the new kube config to your machine for access:
; mv ~/.kube/config ~/.kube/config.bak
; rsync k1.home.arpa:~/.kube/config ~/.kube/config
I also had to run:
; sudo dnf remove zram-generator-defaults
; sudo swapoff -a
to permanently disable swap, which was causing kubelet to fail.
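On Fedora, the zram-generator-defaults package is what sets up the zram swap device at boot, so removing it keeps swap from returning on the next restart. You can confirm swap is fully off with:
; swapon --show
; free -h
swapon --show printing nothing and free -h reporting zero swap means kubelet's swap check will pass. If you also have a disk-based swap entry in /etc/fstab, comment that out as well.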
Scripts
You could run a script like this on the control plane:
#!/usr/bin/env bash
set -euxo pipefail

# Install jq if it is not already present.
if ! command -v jq >/dev/null 2>&1; then
  sudo dnf install --assumeyes --quiet jq
fi

case "$1" in
  plan)
    # Compute the next minor version, e.g. 1.26 -> 1.27.
    version=$(
      kubeadm version --output json \
        | jq --raw-output '"\(.clientVersion.major).\(.clientVersion.minor | tonumber + 1)"'
    )
    # Point the repo at the next minor release.
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/repodata/repomd.xml.key
EOF
    # Find the newest package version for that release.
    pkg_version=$(
      dnf list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes \
        | awk '{ print $2 }' \
        | grep --fixed-strings "${version}" \
        | sort -V \
        | uniq \
        | tail -n 1
    )
    sudo dnf install --assumeyes --quiet "kubeadm-${pkg_version}" --disableexcludes=kubernetes
    sudo kubeadm upgrade plan
    sudo dnf install --assumeyes --quiet "kubelet-${pkg_version}" "kubectl-${pkg_version}" --disableexcludes=kubernetes
    ;;
  apply)
    # kubeadm has already been upgraded, so its version is the target.
    version=$(
      kubeadm version --output json \
        | jq --raw-output '.clientVersion.gitVersion'
    )
    sudo kubeadm upgrade apply "${version}"
    sudo systemctl restart kubelet.service
    ;;
  *)
    echo "Unknown action $1" >&2
    exit 1
    ;;
esac
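To see what the jq expression in plan computes, you can run it against the kubeadm version output from earlier; with Major "1" and Minor "26", it produces the next minor release:
; kubeadm version --output json | jq --raw-output '"\(.clientVersion.major).\(.clientVersion.minor | tonumber + 1)"'
1.27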
and this on each node:
#!/usr/bin/env bash
set -euxo pipefail

# Install jq if it is not already present.
if ! command -v jq >/dev/null 2>&1; then
  sudo dnf install --assumeyes --quiet jq
fi

case "$1" in
  apply)
    # Compute the next minor version, e.g. 1.26 -> 1.27.
    version=$(
      kubeadm version --output json \
        | jq --raw-output '"\(.clientVersion.major).\(.clientVersion.minor | tonumber + 1)"'
    )
    # Point the repo at the next minor release.
    cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v${version}/rpm/repodata/repomd.xml.key
EOF
    # Find the newest package version for that release.
    pkg_version=$(
      dnf list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes \
        | awk '{ print $2 }' \
        | grep --fixed-strings "${version}" \
        | sort -V \
        | uniq \
        | tail -n 1
    )
    sudo dnf install --assumeyes --quiet "kubeadm-${pkg_version}" --disableexcludes=kubernetes
    sudo kubeadm upgrade node
    sudo dnf install --assumeyes --quiet "kubelet-${pkg_version}" "kubectl-${pkg_version}" --disableexcludes=kubernetes
    ;;
  restart)
    sudo systemctl restart kubelet.service
    ;;
  *)
    echo "Unknown action $1" >&2
    exit 1
    ;;
esac
Execute:
- ./upgrade-k8s.sh plan on the control plane node(s) first,
- then ./upgrade-k8s.sh apply on the worker(s),
- then ./upgrade-k8s.sh apply on the control plane node(s),
- finally ./upgrade-k8s.sh restart on the worker(s).
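If you would rather drive the whole sequence from your workstation, a wrapper along these lines works. This is a sketch: it assumes upgrade-k8s.sh is already copied to each host and executable; k1.home.arpa is the control plane host from earlier, and w1/w2 are hypothetical worker hostnames to substitute with your own. Keep the session interactive, since kubeadm upgrade apply asks for confirmation.
#!/usr/bin/env bash
set -euxo pipefail
# Hypothetical hostnames; replace with your control plane and workers.
control_plane=k1.home.arpa
workers=(w1 w2)
ssh "${control_plane}" ./upgrade-k8s.sh plan
for h in "${workers[@]}"; do ssh "$h" ./upgrade-k8s.sh apply; done
ssh "${control_plane}" ./upgrade-k8s.sh apply
for h in "${workers[@]}"; do ssh "$h" ./upgrade-k8s.sh restart; done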
Rollbacks
When upgrading from 1.30 to 1.31, I experienced an issue where all pods began to crash, and kubelet errored with:
Error: services have not yet been read at least once, cannot construct envvars
This was because, in a previous iteration of these instructions, kubelet was restarted before the cluster components were upgraded.
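If you hit something like this, following kubelet's journal on an affected node is the fastest way to see the error in context:
; sudo journalctl --unit kubelet.service --follow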
To revert, on all machines, point the repo back at the previous minor version:
; cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/
enabled=1
gpgcheck=1
gpgkey=https://pkgs.k8s.io/core:/stable:/v1.30/rpm/repodata/repomd.xml.key
EOF
Then find the right patch version with:
; yum list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes
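If you'd rather not eyeball the list, the same pipeline the scripts use will pull out the newest matching package version:
; dnf list available --disablerepo='*' --enablerepo=kubernetes --showduplicates --disableexcludes=kubernetes | awk '{ print $2 }' | grep --fixed-strings '1.30' | sort -V | uniq | tail -n 1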
With that version, run the following on all nodes:
; sudo dnf install --assumeyes --quiet "kubeadm-1.30.8-*" --disableexcludes=kubernetes
; sudo dnf install --assumeyes --quiet "kubelet-1.30.8-*" "kubectl-1.30.8-*" --disableexcludes=kubernetes
; sudo systemctl restart kubelet.service
After downgrading to 1.30.8, all pods stopped crashing.
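To confirm the rollback took, check that every node reports the downgraded kubelet and that pods settle back into Running:
; kubectl get nodes
; kubectl get pods --all-namespaces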