Kubernetes on a Raspberry Pi Cluster

By Kesi Soundararajan

I. Introduction

Kubernetes is an open source container orchestration tool that automates the management of containerized applications. Despite its relatively recent 2015 release, Kubernetes has quickly become an industry standard for developers who run containers in production. To understand more about how Kubernetes works in practice, I wanted to get it up and running on a cluster of Raspberry Pis (Pi 3, Model B+) and use it to deploy a simple Python app that pulls sensor data from a BMP280 and displays it on a cheap OLED display.

II. Setting up Kubernetes

Kasper Nissen has a great guide that walks you through installing and initializing Kubernetes on your Raspberry Pis. To avoid repeating too much of what he says, the guide can be found here (https://kubecloud.io/setting-up-a-kubernetes-1-11-raspberry-pi-cluster-using-kubeadm-952bbda329c8). Some supplementary notes to the guide can be found below.

Notes from Kasper’s guide:

Before running the install.sh script that installs Docker and Kubernetes, you should check that your certificates are updated by running the command below.
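The command itself did not survive in this copy of the post; one common way to refresh the certificate store on Raspbian (an assumption on my part, since the guide's install.sh pulls packages over HTTPS via apt) is:

```shell
# Update package lists and make sure the CA certificate store is current
sudo apt-get update && sudo apt-get install -y ca-certificates
```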

Additionally, there’s no need to bother using the custom kubeadm_conf file the guide suggests when initializing Kubernetes. Instead, simply initialize the cluster with:
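The exact command was lost from this copy; a plain initialization with no custom config file looks like:

```shell
# Initialize the control plane with kubeadm's defaults
sudo kubeadm init
```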

The guide also forgets an end quote in its command for installing Weave. The correct command should be:
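The original snippet was stripped from this copy; the standard Weave Net install command of that era, with the closing quote in place, was:

```shell
# Install Weave Net as the cluster's pod network add-on
sudo kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```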

Your nodes will not be in the “Ready” state until Weave is correctly installed and launched. If the above command does not work as expected, troubleshooting can be done with the $ weave stop and $ weave launch commands, and by checking whether Weave shows up in the list of cluster-internal pods with $ sudo kubectl get po --all-namespaces.

Another thing to keep in mind is that if you are attempting to join a new worker node to your cluster after initializing it, a new join token will be needed if more than 24 hours has passed since you initialized your cluster. New join tokens can be generated with:
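The command was dropped from this copy of the post; with kubeadm it is:

```shell
# Generate a fresh join token (the default token expires after 24 hours)
sudo kubeadm token create
```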

This will spit out a new token that you can substitute into your join command. Keep in mind that the discovery token ca-cert hash stays constant, so it’s worth noting that hash down somewhere after initializing Kubernetes in case you choose to add more Pis to your cluster down the line.

After following the guide, test that everything is running correctly by running:
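The commands themselves were dropped from this copy of the post; the standard checks are:

```shell
# List the nodes in the cluster and their status
sudo kubectl get nodes

# List all cluster-internal pods across every namespace
sudo kubectl get pods --all-namespaces
```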

*If these commands are not running (I encountered a port error), try explicitly passing in a config using the --kubeconfig flag

The output should show all of your nodes in the “Ready” state and all cluster-internal pods as “Running”.

III. Making your app a Docker image

Now that your Kubernetes cluster is up and running, it’s time to start deploying your app! In my case, I’m working with a simple Python script I wrote that takes data from a BMP280 temperature, pressure, and altitude sensor and displays it on an SSD1306 OLED display (both are wired to the pins on the Pi using soldered connectors for SCL and SDA). I’ve packaged this into a Dockerhub repository that can be found at https://cloud.docker.com/repository/docker/kesisound/bmp-disp; however, for the sake of this blog post I’ll go through the steps of packaging a Python script into a Docker image. The Python script and finished Dockerfile can be found here (https://github.com/KesiSound/BMPDisplay).

*Note: My display is on i2c address 0x3D. Use $ i2cdetect -y 1 to make sure your display’s i2c address is the same if you are trying to replicate the script at home. If your display’s i2c address is different, you can go into BMPDisp.py and change the parameter of disp to whatever your address is.

Creating our Dockerfile

The first step in turning our script into a fully independent containerized Docker image is making a Dockerfile. A Dockerfile essentially lays out the dependencies and parent image we want to include when packaging our app. It should be created with no file extension and must be put in the same directory as your Python code. For this project, our Dockerfile looks like so:
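The Dockerfile itself did not survive in this copy of the post (the finished version lives in the GitHub repo linked above). A minimal sketch of what it plausibly looked like, assuming an ARM Python base image and the Adafruit CircuitPython libraries for the BMP280 and SSD1306 (both dependency names are my assumptions):

```dockerfile
# Parent image: Python 3 built for the Pi's ARM architecture (assumed)
FROM arm32v7/python:3.7-buster

WORKDIR /app

# Sensor and display libraries (assumed dependency names)
RUN pip install adafruit-circuitpython-bmp280 adafruit-circuitpython-ssd1306

# Copy the script into the image
COPY BMPDisp.py .

# Run the script when the container starts
CMD ["python", "BMPDisp.py"]
```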

Building Docker Image

Our Dockerfile is ready to go, now it’s time to build it!
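The build command was lost from this copy; run it from the directory containing the Dockerfile (the bmp-disp tag here is just a readable name):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t bmp-disp .
```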

Running this should make Docker start building an image, following all the steps we laid out in the Dockerfile. To make sure the Docker image has built correctly, try running it through Docker with:
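The run command did not survive in this copy; assuming the bmp-disp tag from the build step, it would be:

```shell
# --privileged lets the container reach the Pi's i2c bus
docker run --privileged bmp-disp
```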

*We are using the --privileged flag to run this since without it i2c can’t be enabled inside the container

Upload image to Dockerhub repository

Make a Dockerhub account, and then login with:
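The command was stripped from this copy; it is simply:

```shell
# Prompts for your Dockerhub username and password
docker login
```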

Since we have built the image locally, running $ docker images should show the image ID that we can use to tag the image and push it to the repository:
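The tag and push commands were lost from this copy; the usual pair, with the image ID and repository name as placeholders, is:

```shell
# Tag the local image (substitute the ID shown by `docker images`)
docker tag <image-id> [yourusername]/[repository]:latest

# Push the tagged image to Dockerhub
docker push [yourusername]/[repository]:latest
```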

To check that everything uploaded correctly, go to https://cloud.docker.com/repository/docker/[yourusername]/[repository]

To run your Docker image from the repository on other machines, use

If you want to run the image from my repository, this would be
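Both commands were dropped from this copy; the generic form and the version for my image would be:

```shell
# Pulls the image from Dockerhub if it isn't present locally, then runs it
docker run --privileged [yourusername]/[repository]

# For my image specifically:
docker run --privileged kesisound/bmp-disp
```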

IV. Deploying your workload to Kubernetes

Kubernetes uses YAML files to generate pods and assign workloads to nodes. For the purposes of this project, we want to show that our script can run on individual nodes as specified by our YAML file. On the master, create a blank YAML file and copy and paste the contents below into that file.
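The YAML itself was lost from this copy of the post. A minimal pod spec consistent with the description that follows (a pod named bmp-disp, running my Dockerhub image as a privileged container, pinned to one node; the hostname here is a placeholder) would look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bmp-disp
spec:
  nodeName: worker-01          # replace with the hostname of your target Pi
  containers:
  - name: bmp-disp
    image: kesisound/bmp-disp
    securityContext:
      privileged: true         # needed for i2c access
```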

*Our YAML file creates a privileged pod; this is necessary for i2c to run, but it is not best security practice

Every node is assigned a label “name” based on its hostname. In the YAML file, change the nodeName to be the hostname of whichever Pi you want to run the image on. In essence, this YAML file is telling Kubernetes to create a pod named bmp-disp and run the workload from our Docker repository on the node we specify.

To actually create the pod, run
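The command was stripped from this copy; assuming the YAML file was saved as bmp-disp.yaml (the filename is my assumption), it would be:

```shell
# Create the pod described in the YAML file
sudo kubectl create -f bmp-disp.yaml
```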

To check if the pod is running, use
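The command was dropped from this copy; it is:

```shell
# Shows pod status; the bmp-disp pod should be listed as Running
sudo kubectl get pods
```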

If the pod is up and running, you should see our script running on the worker node you specified in the YAML file!

If this is not the case, $ kubectl describe pods bmp-disp and $ kubectl logs bmp-disp should give you more information about why the pod might be failing.

That’s it! You now have a Kubernetes cluster running your workload on a Raspberry Pi 🙂

V. Additional Notes

Don’t have two or more Raspberry Pis lying around the house? Minikube is a super cool Kubernetes tool that allows you to create virtualized Kubernetes clusters on a single machine. While it won’t run on a Raspberry Pi due to virtualization constraints, Minikube will most likely run on your personal computer and is a really quick way to get started with Kubernetes clusters. More information on Minikube can be found at https://kubernetes.io/docs/tutorials/hello-minikube/.