Issues with Raspberry Pi
Here we describe issues encountered when using a Raspberry Pi together with the testbed.
"Multus" issue when adding RPi as worker node (fixed)
Problem: The Multus pod could not see any CNI plugins because nothing was mounted at /opt/cni/bin inside its container, and its CNI_PATH environment variable was unset.
Solution: Mounted the host’s /opt/cni/bin directory into the Multus container and set CNI_PATH=/opt/cni/bin via a DaemonSet patch. Once Multus could execute /opt/cni/bin/calico and calico-ipam, the kubelet could create the network sandbox, so Zun/Kubernetes pods on the Raspberry Pi worker started normally.
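For reference, the relevant part of the patched kube-multus-ds spec ends up looking roughly like the sketch below. This is not the full manifest: the container name follows the default Multus DaemonSet (an assumption; adjust to your manifest), and all unrelated fields are omitted.
spec:
  template:
    spec:
      containers:
        - name: kube-multus        # container name in the default Multus manifest (assumption)
          env:
            - name: CNI_PATH       # tells Multus where to look for CNI plugins
              value: /opt/cni/bin
          volumeMounts:
            - name: cnibin         # host CNI plugin directory, mounted read-only
              mountPath: /opt/cni/bin
              readOnly: true
      volumes:
        - name: cnibin
          hostPath:
            path: /opt/cni/bin
            type: DirectoryOrCreate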
Below is the exact sequence we ran, with copy-ready commands and the reason for each step, that made Multus see the Calico CNI plugins and let containers start on the Raspberry Pi (ARM) worker. Note: "worker-21" is the worker node in our case.
Spot the error
kubectl describe pod <pending-pod>
Result: failed to find plugin "calico" in path [/opt/cni/bin]
Verify CNI binaries exist on the node
ssh worker-21 ls -l /opt/cni/bin | grep calico
Result: The Calico CNI binaries exist on the host at /opt/cni/bin.
Find the Multus pod on that node
MULTUS=$(kubectl -n kube-system get pod -l app=multus \
  --field-selector spec.nodeName=worker-21 \
  -o jsonpath='{.items[0].metadata.name}')
Result: Captured pod name (kube-multus-ds-xxxx).
Look inside the Multus container
kubectl -n kube-system exec "$MULTUS" -- sh -c 'echo CNI_PATH=$CNI_PATH; ls /opt/cni/bin 2>&1'
Result: CNI_PATH= (unset) and ls: /opt/cni/bin: No such file or directory → Multus had no mount and no env var.
Patch the DaemonSet to add mount & env
kubectl -n kube-system patch daemonset kube-multus-ds --type=json -p '[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "cnibin",
      "mountPath": "/opt/cni/bin",
      "readOnly": true
    }
  }
]'
kubectl -n kube-system patch daemonset kube-multus-ds --type=json -p '[
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "cnibin",
      "hostPath": {
        "path": "/opt/cni/bin",
        "type": "DirectoryOrCreate"
      }
    }
  }
]'
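The CNI_PATH environment variable mentioned in the Solution is added with one more patch of the same form. The sketch below assumes the Multus container already declares an env list; if it does not, patch /spec/template/spec/containers/0/env with the full array instead.
# Sketch only: appends CNI_PATH to the first container's existing env list (assumption).
kubectl -n kube-system patch daemonset kube-multus-ds --type=json -p '[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/env/-",
    "value": {
      "name": "CNI_PATH",
      "value": "/opt/cni/bin"
    }
  }
]'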
Result: The CNI_PATH env var tells Multus where to look; the volumeMount + volume pair exposes the host's /opt/cni/bin inside the pod (read-only).
Watch rollout
kubectl -n kube-system rollout status ds kube-multus-ds
Result: Waited until new Multus pods were Running.
Re-check inside new Multus pod
MULTUS=$(kubectl -n kube-system get pod -l app=multus \
--field-selector spec.nodeName=worker-21 \
-o jsonpath='{.items[0].metadata.name}')
kubectl -n kube-system exec "$MULTUS" -- ls -1 /opt/cni/bin | head
Result: You should see the Calico (and related) CNI plugin binaries listed, for example:
calico
calico-ipam
bandwidth
portmap
...
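Optionally, re-run the earlier environment check as well; it should now print CNI_PATH=/opt/cni/bin instead of an empty value:
kubectl -n kube-system exec "$MULTUS" -- sh -c 'echo CNI_PATH=$CNI_PATH'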
Watch the previously stuck container
kubectl -n <project-ns> get pod <zun-pod> -w
Result: Phase changed ContainerCreating → Running within ~1 minute.
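If the pod does not reach Running, re-running the describe from the first step shows whether the 'failed to find plugin "calico"' event is still being emitted (the Events section is printed at the end of the output):
kubectl -n <project-ns> describe pod <zun-pod>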