Enroll Worker Nodes

Register Worker Nodes

Enrolling a worker node in our OpenStack deployment requires the following steps:

  1. Installing k3s on the worker node

  2. Importing the worker node

Installing k3s on the worker node

On the controller, get the value for mynodetoken by running:

$ sudo cat /var/lib/rancher/k3s/server/node-token

On the controller, get the value for server by reading the EXTERNAL-IP of the node:

$ kubectl get nodes -o wide
NAME       STATUS   ROLES                  AGE    VERSION        INTERNAL-IP   EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
edge-vm1   Ready    control-plane,master   148m   v1.22.5+k3s1   10.10.2.31    10.0.87.20    Ubuntu 20.04.5 LTS   5.4.0-128-generic   containerd://1.5.8-k3s1

Here you can choose between two different installations:


1. Simple K3S installation

On the worker node, run:

$ curl -sfL https://get.k3s.io | K3S_URL=https://server:6443 K3S_TOKEN=mynodetoken sh -
[INFO]  Finding release for channel stable
[INFO]  Using v1.24.6+k3s1 as release
[INFO]  Downloading hash https://github.com/k3s-io/k3s/releases/download/v1.24.6+k3s1/sha256sum-amd64.txt
[INFO]  Downloading binary https://github.com/k3s-io/k3s/releases/download/v1.24.6+k3s1/k3s
[INFO]  Verifying binary download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Skipping installation of SELinux RPM
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  Creating /usr/local/bin/ctr symlink to k3s
[INFO]  Creating killall script /usr/local/bin/k3s-killall.sh
[INFO]  Creating uninstall script /usr/local/bin/k3s-agent-uninstall.sh
[INFO]  env: Creating environment file /etc/systemd/system/k3s-agent.service.env
[INFO]  systemd: Creating service file /etc/systemd/system/k3s-agent.service
[INFO]  systemd: Enabling k3s-agent unit
Created symlink /etc/systemd/system/multi-user.target.wants/k3s-agent.service → /etc/systemd/system/k3s-agent.service.
[INFO]  systemd: Starting k3s-agent

2. K3S installation with a static CPU manager policy

If you would like to enroll a worker node with a static CPU manager policy, run the following on the worker node:
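
A minimal sketch, assuming the standard k3s install script; the reservation values are example placeholders (the static policy requires the kubelet to reserve some CPU for the system):

$ curl -sfL https://get.k3s.io | K3S_URL=https://server:6443 K3S_TOKEN=mynodetoken \
    sh -s - --kubelet-arg=cpu-manager-policy=static \
            --kubelet-arg=kube-reserved=cpu=1 \
            --kubelet-arg=system-reserved=cpu=1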

Check that the CPU manager state is set by running the following on the worker node:
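
A sketch; on k3s the kubelet keeps its state under /var/lib/kubelet by default:

$ sudo cat /var/lib/kubelet/cpu_manager_state
# expect "policyName":"static"; if a stale state file from a previous
# install holds a different policy, remove it and restart k3s-agent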


On the controller, check that the worker node has been added:
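
For example:

$ kubectl get nodes -o wide
# the new worker should be listed with STATUS Ready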

Check that all Kubernetes pods are running and healthy:
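
For example:

$ kubectl get pods -A
# all pods should be Running or Completed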

On the controller, check the node's details:
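
For example (worker-1 is a placeholder for the new node's name):

$ kubectl describe node worker-1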

On the worker node, install the CNI plugins (requires Go):
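
A sketch of building the reference CNI plugins from source; /opt/cni/bin is the conventional install path:

$ git clone https://github.com/containernetworking/plugins.git
$ cd plugins
$ ./build_linux.sh
$ sudo cp bin/* /opt/cni/bin/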

On the worker node, check that all interfaces are up, set up with the proper MTU, and connected according to the testbed's inventory.
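
For example:

$ ip link show
# every testbed interface should be UP and show the expected mtu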

Importing the worker node

Here we add the node to OpenStack. First there must be a JSON file containing the worker node information. The format must be as below.
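
An illustrative sketch; the field names here are assumptions and must be matched against your Doni deployment:

$ cat worker-node.json
{
  "name": "worker-1",
  "hardware_type": "device.machine",
  "properties": {
    "machine_name": "worker-1",
    "interfaces": [
      { "name": "eno1", "mtu": 1500 },
      { "name": "eno2", "mtu": 9000 }
    ]
  }
}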

Then we import the JSON file using the openstack command:
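
A sketch, assuming the Doni plugin for openstackclient; check openstack hardware --help for the exact subcommand and options on your deployment:

$ openstack hardware import --file worker-node.json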

Note that the names and MTUs of the interfaces must match those on the worker node.

Check that it is added to Doni's database and check its status. Each worker's state must be STEADY; otherwise, check the services' logs.
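
For example:

$ openstack hardware list
$ openstack hardware show worker-1
# each worker listed for the node should be in state STEADY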

Doni adds the node to Blazar for user reservation and creates network attachment definitions on k8s for the baremetal ports. Check that they exist.
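
For example (sketches; host and namespace names will differ):

$ openstack reservation host list | grep worker-1
$ kubectl get network-attachment-definitions -A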

Remove and clean a worker node

  • Make sure the node has no reservations. Delete any that exist.

  • Delete the hardware in Doni.

  • On the worker node, run the k3s agent uninstall script to stop and remove the k3s services.

  • Check the nodes registered on the controller and delete the desired node (see the sketch after this list).
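
A sketch of the corresponding commands (worker-1 is a placeholder; check each client's --help for the exact subcommands):

$ openstack reservation lease list            # no lease should hold the node
$ openstack hardware delete worker-1          # remove the hardware from Doni
$ sudo /usr/local/bin/k3s-agent-uninstall.sh  # on the worker node
$ kubectl get nodes                           # on the controller
$ kubectl delete node worker-1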

Attach networks to the containers

This feature is only enabled for physical VLAN networks registered in Neutron.

When creating a container, to attach any network to its network interfaces, make sure to provide the proper labels in the following format:
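
An illustrative sketch; the networks.N.* label keys are assumptions inferred from the examples below, and the CLI namespace (appcontainer vs. container) may differ on your deployment:

$ openstack appcontainer create \
    --label networks.1.interface=<baremetal interface> \
    --label networks.1.ip=<static ip>/<prefix length> \
    --label networks.1.routes=<destination subnet>-<gateway> \
    ...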

If you don't specify an IP for an interface, the container will get an IP from the DHCP agent; therefore, the network must be DHCP-enabled. For example, the following settings correspond to attaching two networks, where we ask for a static IP on the first and the second gets an IP from the DHCP agent. The routes option can be used multiple times, resulting in multiple route commands.
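
A sketch with the same assumed label syntax as above; network 2 has no ip label, so it gets an address from the DHCP agent:

$ openstack appcontainer create \
    --label networks.1.interface=eno1 \
    --label networks.1.ip=10.20.30.40/24 \
    --label networks.2.interface=eno2 \
    ...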

Note that the first container to take a baremetal interface decides which network that interface gets attached to. Subsequent containers can use the same baremetal interface, but they must be in the same network and subnet as the first container. For example:

Container 1 on worker-1:
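
For instance (assumed label syntax as above):

--label networks.1.interface=eno1
--label networks.1.ip=10.20.30.40/24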

Container 2 on worker-1 (correct):
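
Same interface, same network and subnet, different IP:

--label networks.1.interface=eno1
--label networks.1.ip=10.20.30.41/24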

Container 2 on worker-1 (wrong):
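
Same interface but a different subnet, which will not work:

--label networks.1.interface=eno1
--label networks.1.ip=192.168.1.10/24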

Routing example
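
A sketch reusing the assumed label syntax; the routes value format shown here (destination-gateway) is an assumption, and each routes label translates to one route command inside the container:

$ openstack appcontainer create \
    --label networks.1.interface=eno1 \
    --label networks.1.ip=10.20.30.40/24 \
    --label networks.1.routes=10.40.0.0/24-10.20.30.1 \
    ...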

Create containers with privileges

When creating a container, to enable any capabilities, make sure to provide the proper labels in the following format:
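
A sketch; the label key is an assumption and should be checked against your deployment:

$ openstack appcontainer create \
    --label capabilities=NET_ADMIN \
    ...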

For more information about security contexts, see the Kubernetes documentation.

Control CPU and memory of the container
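
For example, a sketch using Zun's standard resource flags (CPU count and memory in MiB):

$ openstack appcontainer create --cpu 2 --memory 1024 ...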

Check the limits from within the container:
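
For example (paths assume cgroup v1; under cgroup v2 check cpu.max and memory.max instead):

$ cat /sys/fs/cgroup/cpu/cpu.cfs_quota_us
$ cat /sys/fs/cgroup/memory/memory.limit_in_bytes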

For more information about controlling node resources in Kubernetes, see the Kubernetes documentation.

More troubleshooting

Users can request access to the local bridge; see https://devopstales.github.io/kubernetes/multus/ for the bridge CNI. If the option is checked, Multus will add an interface that is connected to a bridge on the container's worker node. This means all containers running on that node can be connected to each other through this bridge. In the JSON, we specify the IP address management (ipam) of this interface. It is better to use different subnets for different container workers.
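
A sketch of such a bridge CNI configuration (bridge name and subnet are examples):

{
  "cniVersion": "0.3.1",
  "type": "bridge",
  "bridge": "br-local",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.1.0/24"
  }
}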

For the dhcp IPAM to work, follow https://www.cni.dev/plugins/current/ipam/dhcp/ and https://superuser.com/questions/1727321/podman-macvlan-network-error.
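
In short, the dhcp IPAM plugin needs its companion daemon running on each worker node, e.g.:

$ sudo /opt/cni/bin/dhcp daemon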
