Kubernetes on a Raspberry Pi Cluster
By Kesi Soundararajan
I. Introduction
Kubernetes is an open source container orchestration tool that automates the management of containerized applications. Despite its relatively recent release in 2015, Kubernetes has quickly become an industry standard for developers who run containers in production. To understand more about how Kubernetes works, in this VMware {code} lab we will use a Kubernetes cluster built from Raspberry Pis (Pi 3, Model B+) to deploy a simple Python app that pulls data from various IoT sensors and displays it on a cheap OLED display. VMware {code} is VMware's developer community, with sample exchanges, meetups, code talks, and labs like this one! You can check out more VMware {code} content at https://code.vmware.com/home.
II. Setting up Kubernetes
Kasper Nissen has a great guide that walks you through installing and initializing Kubernetes on your Raspberry Pis (https://kubecloud.io/setting-up-a-kubernetes-1-11-raspberry-pi-cluster-using-kubeadm-952bbda329c8). The steps laid out here are taken mostly from Kasper's guide, with a few minor tweaks.
Start by flashing the latest copy of Raspbian from https://www.raspberrypi.org/downloads/raspbian/ to each Pi's SD card (Etcher works well for this). Since we do not want to be plugging and unplugging HDMI cables to access our Pis, we need to enable SSH by running $ touch /Volumes/boot/ssh (the mount point on macOS) after flashing Raspbian to the SD cards. This can also be accomplished by creating a blank file named "ssh" in the root of the SD card's boot partition.
To SSH into the Pis from your computer, run
$ ssh pi@raspberrypi.local
The Pis will need new hostnames as well as static IPs before we continue. Kasper's guide includes a script for doing so, which can be created with
$ nano hostname_and_ip.sh
and copying in:
#!/bin/sh

hostname=$1
ip=$2   # should be of format: 192.168.1.100
dns=$3  # should be of format: 192.168.1.1

# Change the hostname
sudo hostnamectl --transient set-hostname $hostname
sudo hostnamectl --static set-hostname $hostname
sudo hostnamectl --pretty set-hostname $hostname
sudo sed -i s/raspberrypi/$hostname/g /etc/hosts

# Set the static ip
sudo cat <<EOT >> /etc/dhcpcd.conf
interface eth0
static ip_address=$ip/24
static routers=$dns
static domain_name_servers=$dns
EOT
(This is all taken from Kasper's original blog post; the script itself can be found at https://gist.github.com/kaspernissen/473806621f76c81abd07cd801b686cfa#file-hostname_and_ip-sh)
To run the script:
$ sh hostname_and_ip.sh [new hostname] [new static IP address] [router IP address]
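For example, to name the master node k8s-master with a static address of 192.168.1.100 behind a router at 192.168.1.1 (these values are examples only; substitute addresses that match your own network), you would run:

$ sh hostname_and_ip.sh k8s-master 192.168.1.100 192.168.1.1

Reboot the Pi afterwards so the new hostname and static IP take effect.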
We are now ready to install Docker and Kubernetes on our Pis. Once again, Kasper's guide provides us with a script for this. However, before using the script, you should make sure your CA certificates are up to date by running the command below.
$ sudo apt-get install --reinstall ca-certificates
As before, we create the script with
$ nano install.sh
And copy and paste in:
#!/bin/sh

# Install Docker
curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker

# Disable Swap
sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

echo Adding " cgroup_enable=cpuset cgroup_enable=memory" to /boot/cmdline.txt
sudo cp /boot/cmdline.txt /boot/cmdline_backup.txt
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory"
echo $orig | sudo tee /boot/cmdline.txt

# Add repo list and install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm
(https://gist.github.com/kaspernissen/1359aa67395302c6eb064228caa52d1d#file-install-sh)
To run the script:
$ sh install.sh
Repeat this process until each of your Pis has its own unique static IP address and hostname, with Docker and Kubernetes installed. Note that the cgroup settings added to /boot/cmdline.txt only take effect after a reboot, so reboot each Pi once the script has finished.
With all the prerequisites in place, we can use our master node to initialize a Kubernetes cluster.
$ sudo kubeadm init
After initializing the cluster, kubeadm will print a join command containing a token that worker nodes use to join the cluster; make a note of it.
Start by running
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
and then run the join command on each of your worker nodes. Keep in mind that if you are attempting to join a new worker node more than 24 hours after your cluster has been initialized, a new join token will be needed. New join tokens can be generated with:
$ kubeadm token create
This will print a new token that you can substitute into your join command. The discovery token ca-cert hash stays constant, so it's worth noting it down somewhere in case you choose to add more Pis to your cluster down the line.
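For reference, the join command run on each worker takes roughly this shape (the IP address, token, and hash below are placeholders, not real values):

$ sudo kubeadm join 192.168.1.100:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>

If you would rather not assemble it by hand, kubeadm can also print a complete, ready-to-run join command for you:

$ kubeadm token create --print-join-command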
To check that all your nodes have joined the cluster successfully, run
$ kubectl get nodes
The names of all your nodes should appear; however, they will all be in the "NotReady" state until a container network like Weave is installed. If all your nodes have joined the cluster, it's time to install Weave with
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
Your nodes will not reach the "Ready" state until Weave is correctly installed and launched. If the above command does not work as expected, you can troubleshoot with the $ weave stop and $ weave launch commands, and check whether Weave shows up in the list of cluster-internal pods with $ sudo kubectl get po --all-namespaces
To test that everything is running correctly, run:
$ sudo kubectl get nodes
$ sudo kubectl get po --all-namespaces
*If these commands give you a port error, try explicitly passing in a config using the --kubeconfig flag (for example, --kubeconfig $HOME/.kube/config)
The output should show all of your nodes in the "Ready" state and all cluster-internal pods as "Running":
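For the nodes, that means something along these lines (the hostnames, ages, and versions will differ on your cluster):

NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   25m   v1.11.2
k8s-node-1   Ready    <none>   20m   v1.11.2
k8s-node-2   Ready    <none>   20m   v1.11.2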
III. Making your app a Docker image
Now that your Kubernetes cluster is up and running, let's deploy an app! We'll be working with a simple Python script that takes data from a BMP280 temperature, pressure, and altitude sensor and displays it on an SSD1306 OLED display (both are wired to the i2c pins on the Pi using connectors that have been soldered to split into two). The steps for packaging a Python script into a Docker image are listed below; the Python script and finished Dockerfile can be found at https://github.com/KesiSound/BMPDisplay, and a Dockerhub repository for the project can be found at https://cloud.docker.com/u/vmwarecode/repository/docker/vmwarecode/bmp280. The vmwarecode Dockerhub page contains more repositories for other sensor apps you can deploy, including variants for DHT11, infrared, and digital touch sensors (https://hub.docker.com/u/vmwarecode).
Creating a Dockerfile
The first step in turning our script into a fully independent containerized Docker image is making a Dockerfile. A Dockerfile essentially lays out the dependencies and parent image to include when packaging our app. It should be created with no file extension and must be put in the same directory as your Python code. For this project, our Dockerfile looks like so:
FROM arm32v7/python:3-slim-stretch

RUN apt-get update -y
RUN apt-get install -y build-essential zlib1g zlib1g-dev libjpeg-dev
RUN pip install Pillow
RUN pip install Adafruit_SSD1306
RUN pip3 install adafruit-circuitpython-bmp280

ADD BMPDisp.py /

CMD [ "python", "./BMPDisp.py" ]
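For context, the heart of BMPDisp.py is a loop that reads the BMP280 over i2c and draws the readings onto the SSD1306. A minimal sketch of that idea looks something like the code below (illustrative only; the display size and text layout are assumptions, and the real script lives in the GitHub repo linked above):

# Illustrative sketch of a BMP280-to-OLED loop, not the exact script from the repo.
import time
import board
import busio
import adafruit_bmp280
import Adafruit_SSD1306
from PIL import Image, ImageDraw, ImageFont

i2c = busio.I2C(board.SCL, board.SDA)
bmp280 = adafruit_bmp280.Adafruit_BMP280_I2C(i2c)

disp = Adafruit_SSD1306.SSD1306_128_32(rst=None)  # assuming a 128x32 panel
disp.begin()
disp.clear()
disp.display()

font = ImageFont.load_default()

while True:
    # Render the latest readings into an off-screen 1-bit image, then push it
    image = Image.new('1', (disp.width, disp.height))
    draw = ImageDraw.Draw(image)
    draw.text((0, 0), 'Temp: %0.1f C' % bmp280.temperature, font=font, fill=255)
    draw.text((0, 10), 'Pres: %0.1f hPa' % bmp280.pressure, font=font, fill=255)
    draw.text((0, 20), 'Alt: %0.1f m' % bmp280.altitude, font=font, fill=255)
    disp.image(image)
    disp.display()
    time.sleep(1)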
Building a Docker Image
Our Dockerfile is ready to go; now it's time to build it!
$ sudo docker build -t [buildname] .
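For example, with a hypothetical build name of bmpdisp:

$ sudo docker build -t bmpdisp .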
Running this should make Docker start building an image, following all the steps we laid out in the Dockerfile. To make sure the Docker image has built correctly, try running it through Docker with:
$ sudo docker run --privileged [buildname]
*We are using the --privileged flag here because without it the container cannot access i2c
Uploading to the Dockerhub repository
Make a Dockerhub account, and then log in with:
$ docker login
Since we have built the image locally, running $ docker images should show the image ID, which we can use to tag the image and push it to the repository.
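The output will look something like this (the image ID, age, and size here are made up for illustration):

REPOSITORY   TAG      IMAGE ID       CREATED         SIZE
bmpdisp      latest   a1b2c3d4e5f6   2 minutes ago   350MB

With the image ID in hand, tag and push the image: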
$ docker tag [image id] [username]/[repository]:[tag]
$ docker push [username]/[repository]
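For this project's repository, that would look something like this (with a1b2c3d4e5f6 standing in for your actual image ID):

$ docker tag a1b2c3d4e5f6 vmwarecode/bmp280:v1
$ docker push vmwarecode/bmp280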
To check that everything uploaded correctly, go to https://cloud.docker.com/repository/docker/[yourusername]/[repository]
To run your Docker image from the repository on other machines, use
$ sudo docker run --privileged [yourusername]/[repository]:[tag]
If you want to run the image from the repository for this project, this would be
$ sudo docker run --privileged vmwarecode/bmp280:v1
IV. Deploying your workload to Kubernetes
Kubernetes uses YAML files to generate pods and assign workloads to nodes. For the purposes of this project, we want to show that our script can run on individual nodes as specified by our YAML file. On the master, create a blank YAML file and copy and paste the contents below into that file.
apiVersion: v1
kind: Pod
metadata:
  name: bmp-disp
spec:
  nodeName: [name of the node you want to run the image on]
  containers:
  - name: bmp-disp
    image: vmwarecode/bmp280:v1
    imagePullPolicy: IfNotPresent
    securityContext:
      capabilities:
        add: ["SYS_ADMIN"]
      privileged: true
      allowPrivilegeEscalation: true
*Our YAML file creates a privileged pod; this is necessary for i2c access, but it is not a security best practice
Every node's name defaults to its hostname, which is also exposed through the kubernetes.io/hostname label. Change nodeName to the hostname of whichever Pi you want to run the image on. This YAML file tells Kubernetes to create a pod named bmp-disp and run the workload from our Docker repository on the node we specify.
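If you are not sure of your nodes' exact names, you can list them along with their labels (including kubernetes.io/hostname) with:

$ kubectl get nodes --show-labels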
If you want to run your application on all your worker nodes, you can have your YAML file declare a DaemonSet instead of a pod, as sketched below. An example implementation can be found at https://github.com/KesiSound/BMPDisplay/blob/master/daemonset.yaml
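A minimal DaemonSet version of the same workload would look roughly like this (a sketch for illustration; see the linked file for the working version):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: bmp-disp
spec:
  selector:
    matchLabels:
      app: bmp-disp
  template:
    metadata:
      labels:
        app: bmp-disp
    spec:
      containers:
      - name: bmp-disp
        image: vmwarecode/bmp280:v1
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true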
To actually create the pod (or DaemonSet), run
$ kubectl create -f [filename].yaml
To check if the pod is running, use
$ kubectl get pods
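A healthy pod reports a status of Running, along these lines (the age will vary):

NAME       READY   STATUS    RESTARTS   AGE
bmp-disp   1/1     Running   0          1m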
If the pod is up and running, you should see our script running on the worker node you specified in the YAML file!
If this is not the case, running $ kubectl describe pods bmp-disp should give you more information about why the pod might be failing, and $ kubectl logs bmp-disp will show your pod's logs.
That’s it! You now have a Kubernetes cluster running your workload on a Raspberry Pi 🙂