
Kubernetes on a Raspberry Pi Cluster

By Kesi Soundararajan

I. Introduction

Kubernetes is an open source container orchestration tool that automates the management of containerized applications. Despite its relatively recent 2015 release, Kubernetes has quickly become an industry standard for developers who run containers in production. To understand more about how Kubernetes works, in this VMware {code} lab we will use a Kubernetes cluster built from Raspberry Pis (Pi 3, Model B+) to deploy a simple Python app that pulls data from IoT sensors and displays it on an inexpensive OLED display. VMware {code} is VMware’s developer community, with sample exchanges, meetups, code talks, and labs like this one! You can check out more VMware {code} content at https://code.vmware.com/home.

II. Setting up Kubernetes

Kasper Nissen has a great guide that walks you through installing and initializing Kubernetes on your Raspberry Pis (https://kubecloud.io/setting-up-a-kubernetes-1-11-raspberry-pi-cluster-using-kubeadm-952bbda329c8). The steps laid out here are taken mostly from Kasper’s guide, with a few minor tweaks.

Start by flashing the latest copy of Raspbian from https://www.raspberrypi.org/downloads/raspbian/ to each of your Pis’ SD cards (Etcher works well for this). Since we do not want to be plugging and unplugging HDMI cables to access our Pis, we need to enable SSH by running $ touch /Volumes/boot/ssh after flashing Raspbian to the SD cards. This can also be accomplished by creating a blank file named “ssh” in the root of the SD card’s boot partition.

To SSH into the Pis from your computer, run
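(The command itself appeared as a screenshot in the original post. On a fresh Raspbian image the defaults are hostname raspberrypi, user pi, and password raspberry, so a typical first connection looks like this:)

$ ssh pi@raspberrypi.local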

The Pi’s will need new hostnames as well as static IP’s before we continue. Kasper’s guide includes a script for doing so which can be installed with the following commands:

And copying in:
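(A sketch of what the linked gist contains; verify against the gist before using. The script takes a hostname, a static IP, and a DNS/gateway address as arguments:)

#!/bin/sh

hostname=$1
ip=$2   # e.g. 192.168.1.101
dns=$3  # e.g. 192.168.1.1

# Change the hostname
sudo hostnamectl --transient set-hostname $hostname
sudo hostnamectl --static set-hostname $hostname
sudo hostnamectl --pretty set-hostname $hostname
sudo sed -i s/raspberrypi/$hostname/g /etc/hosts

# Set a static IP in dhcpcd.conf
sudo cat <<EOT >> /etc/dhcpcd.conf
interface eth0
static ip_address=$ip/24
static routers=$dns
static domain_name_servers=$dns
EOT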

(This is all taken from Kasper’s original blog, the link to this script can be found at https://gist.github.com/kaspernissen/473806621f76c81abd07cd801b686cfa#file-hostname_and_ip-sh)

To run the script:
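(A sketch of the invocation; the hostname and addresses below are examples, so run it once per Pi with that Pi’s values. Running the whole script under sudo ensures the write to /etc/dhcpcd.conf succeeds:)

$ sudo sh hostname_and_ip.sh k8s-master 192.168.1.101 192.168.1.1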

We are now ready to install Docker and Kubernetes on our Pis. Once again, Kasper’s guide provides us with a script for this. However, before using the script, you should check that your certificates are up to date by running the command below.
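(One reasonable way to do this on a stock Raspbian install:)

$ sudo apt-get update && sudo apt-get install -y ca-certificates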

Like before, to create the script we use
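(Again, assuming nano as your editor:)

$ nano install.sh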

And copy and paste in:
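(A sketch of what the linked gist contains; check the gist itself for the canonical version. It installs Docker, disables swap, enables the cgroups Kubernetes needs, and installs kubeadm:)

#!/bin/sh

# Install Docker
curl -sSL get.docker.com | sh && \
sudo usermod pi -aG docker

# Disable swap (the kubelet refuses to run with swap enabled)
sudo dphys-swapfile swapoff && \
sudo dphys-swapfile uninstall && \
sudo update-rc.d dphys-swapfile remove

# Enable the cpuset and memory cgroups
orig="$(head -n1 /boot/cmdline.txt) cgroup_enable=cpuset cgroup_enable=memory"
echo $orig | sudo tee /boot/cmdline.txt

# Add the Kubernetes apt repo and install kubeadm
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && \
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list && \
sudo apt-get update -q && \
sudo apt-get install -qy kubeadm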

(https://gist.github.com/kaspernissen/1359aa67395302c6eb064228caa52d1d#file-install-sh)

To run the script:
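(A sketch; the reboot is needed so the cgroup changes in /boot/cmdline.txt take effect:)

$ sudo sh install.sh
$ sudo reboot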

Repeat this process for all of your Pis until each has its own unique static IP address and hostname, with Docker and Kubernetes installed.

With all the prerequisites in place, we can use our master node to initialize a Kubernetes cluster.
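(Run on the master node; a plain invocation with default settings:)

$ sudo kubeadm init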

After initializing the cluster, a token will be created that worker nodes can use to join the cluster, like the one below:
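(An illustrative example of the join command kubeadm prints; your master IP, token, and hash will differ:)

kubeadm join 192.168.1.101:6443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:<hash>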

Start by running
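(These are the standard post-init steps that kubeadm itself prints; run them as your normal user on the master so kubectl can find its config:)

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config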

and then run the join command on each of your worker nodes. Keep in mind that if you are attempting to join a new worker node more than 24 hours after your cluster has been initialized, a new join token will be needed. New join tokens can be generated with:
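(Run on the master; the --print-join-command flag conveniently outputs the full join command along with the new token:)

$ sudo kubeadm token create --print-join-command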

This will spit out a new token that you can substitute into your join command. The discovery-token-ca-cert-hash stays constant, so it’s worth noting it down somewhere in case you choose to add more Pis to your cluster down the line.

To check that all your nodes have joined the cluster successfully, run
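(On the master:)

$ kubectl get nodes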

The names of all your nodes should appear; however, they will all be in the “NotReady” state until a container network like Weave is installed. If all your nodes have joined the cluster, it’s time to install Weave with
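(The standard Weave Net install command for kubeadm clusters at the time this guide was written; the URL pins the manifest to your Kubernetes version:)

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"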

Your nodes will not be in the “Ready” state until Weave is correctly installed and launched. If the above command does not work as expected, you can troubleshoot with the $ weave stop and $ weave launch commands, and check whether Weave shows up in the list of cluster-internal pods with $ sudo kubectl get po --all-namespaces

After following the guide, to test that everything is running correctly, run:
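(The same two checks as above, now expecting everything to be healthy:)

$ kubectl get nodes
$ kubectl get po --all-namespaces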

*If these commands give you a port error, try explicitly passing in a config using the --kubeconfig flag (for example, $ kubectl get nodes --kubeconfig /etc/kubernetes/admin.conf)

The output should list all of your nodes in the Ready state and show all cluster-internal pods as Running:
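(Illustrative output for a four-node cluster; your node names, ages, and versions will differ:)

NAME           STATUS   ROLES    AGE   VERSION
k8s-master     Ready    master   1h    v1.11.2
k8s-worker-1   Ready    <none>   1h    v1.11.2
k8s-worker-2   Ready    <none>   1h    v1.11.2
k8s-worker-3   Ready    <none>   1h    v1.11.2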

III. Making your app a Docker image

Now that your Kubernetes cluster is up and running, let’s deploy an app! We’ll be working with a simple Python script that takes data from a BMP280 temperature, pressure, and altitude sensor and displays it on an SSD1306 OLED display (both are wired to the I2C pins on the Pi using connectors that have been soldered to split into two). The steps for packaging a Python script into a Docker image are listed below. The Python script and finished Dockerfile can be found at https://github.com/KesiSound/BMPDisplay, and a Docker Hub repository for the project can be found at https://cloud.docker.com/u/vmwarecode/repository/docker/vmwarecode/bmp280. The vmwarecode Docker Hub page contains more repositories for other sensor apps you can deploy, including variants for DHT11, infrared, and digital touch sensors (https://hub.docker.com/u/vmwarecode).

Creating a Dockerfile

The first step in turning our script into a fully independent containerized Docker image is making a Dockerfile. A Dockerfile essentially lays out the dependencies and parent image to include when packaging our app. It should be created with no file extension and must be put in the same directory as your Python code. For this project, our Dockerfile looks like so:
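(The exact file is in the linked GitHub repo; the sketch below is representative, assuming the script is named BMPDisp.py and uses the Adafruit BMP280, SSD1306, and Pillow libraries. The system packages match the ones readers report needing in the comments below:)

FROM python:3

WORKDIR /usr/src/app
COPY . .

# System libraries that Pillow and the display code need
RUN apt-get update && apt-get install -y build-essential zlib1g-dev libjpeg-dev libopenjp2-7-dev libtiff5

# Python dependencies for the sensor and the OLED display
RUN pip install Pillow Adafruit_SSD1306 adafruit-circuitpython-bmp280 RPi.GPIO

CMD ["python", "./BMPDisp.py"]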

Building a Docker Image

Our Dockerfile is ready to go; now it’s time to build it!
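(Run from the directory containing the Dockerfile; the tag name bmp280 is just an example:)

$ docker build -t bmp280 .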

Running this should make Docker start building an image, following all the steps we laid out in the Dockerfile. To make sure the Docker image has built correctly, try running it through Docker with:
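(Using the example tag from the build step:)

$ docker run --privileged bmp280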

*We are using the --privileged flag to run this, since without it the containers cannot enable I2C

Uploading to the Dockerhub repository

Make a Docker Hub account, and then log in with:
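(You will be prompted for your Docker Hub credentials:)

$ docker login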

Since we have built the image locally, running $ docker images should show the image ID, which we can use to tag the image and push it to the repository:
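(Substitute your own image ID, username, and repository name:)

$ docker tag <image-id> [yourusername]/[repository]:latest
$ docker push [yourusername]/[repository]:latest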

To check that everything uploaded correctly, go to https://cloud.docker.com/repository/docker/[yourusername]/[repository]

To run your Docker image from the repository on other machines, use
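(Again with the --privileged flag, for the same I2C reason as above:)

$ docker run --privileged [yourusername]/[repository]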

If you want to run the image from the repository for this project, this would be
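(Using the vmwarecode/bmp280 repository linked earlier:)

$ docker run --privileged vmwarecode/bmp280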


IV. Deploying your workload to Kubernetes

Kubernetes uses YAML files to generate pods and assign workloads to nodes. For the purposes of this project, we want to show that our script can run on individual nodes as specified by our YAML file. On the master, create a blank YAML file and copy and paste the contents below into that file.
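(A reconstructed sketch of the pod spec; the original post’s copy lost its indentation, so verify the layout against any standard Pod manifest. The nodeName value k8s-worker-1 is an example:)

apiVersion: v1
kind: Pod
metadata:
  name: bmp-disp
spec:
  nodeName: k8s-worker-1
  containers:
  - name: bmp-disp
    image: vmwarecode/bmp280
    securityContext:
      privileged: true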

*Our YAML file creates a privileged pod. This is necessary for I2C to work, but it is not a security best practice

Every node’s name is automatically set from its hostname. Change nodeName to the hostname of whichever Pi you want to run the image on. This YAML file tells Kubernetes to create a pod named bmp-disp and run the workload from our Docker repository on the node we specify.

If you want to run your application on all of your worker nodes, you can instead have your YAML file declare a DaemonSet rather than a single Pod. An example implementation can be found at https://github.com/KesiSound/BMPDisplay/blob/master/daemonset.yaml

To actually create the pod, run
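(Assuming you saved the manifest as bmp-disp.yaml; the filename is an example:)

$ kubectl apply -f bmp-disp.yaml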

To check if the pod is running, use
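(The pod should show a status of Running once the image has been pulled:)

$ kubectl get pods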

If the pod is up and running, you should see our script running on the worker node you specified in the YAML file!

If this is not the case, $ kubectl describe pods bmp-disp should give you more information about why the pod might be failing, as will $ kubectl logs bmp-disp, which shows your pod’s logs.

That’s it! You now have a Kubernetes cluster running your workload on a Raspberry Pi 🙂


Comments


  1. Hi,
    I have followed the instructions and hit some issues (surely due to new releases after the blog post). As I was able to solve all of them, I thought I would share, just in case it is useful to somebody:
    – I did it with only one Raspberry Pi: I untainted the master after deploying the CNI:
    $ kubectl taint nodes --all node-role.kubernetes.io/master-
    – After apt updates I often had a message requesting a reboot, due to some pending kernel update. The message shows a version change from v7+ to v8+. It was just annoying, as no other effects appeared; after rebooting several times, it continued showing up. In fact v7 and v8 refer to the 32-bit and 64-bit architectures respectively, which is a major change and not something upstream, so I ignored it.
    – After the Docker installation, changing the cgroup driver, and setting cgroups for cpu and memory, “docker info” kept showing 3 warnings at the end. These have no effect (cosmetic only). I used “cat /proc/cgroups” to check that cgroups are enabled for mem and cpu.
    – To run the BMPDisp.py script, I ran all the commented lines at the beginning as they appear, but not all of them refer to Python 3: use “python3” instead of “python”, and “pip3” instead of “pip”. I didn’t notice the need for this change at first, and added some libs by solving the errors that appeared. These additional libs were:
    $ sudo apt-get install -y build-essential zlib1g zlib1g-dev libjpeg-dev
    $ sudo pip3 install Pillow
    $ sudo pip3 install Adafruit_SSD1306
    $ sudo apt-get install libopenjp2-7-dev
    $ sudo apt-get install libtiff5
    – I2C port: My BMP280 didn’t use the standard 0x77 address. I had to discover it (after several errors) with the i2cdetect tool (“i2cdetect -y 1”). This shows a matrix with the address(es) in use on bus 1. In my case, the address was 0x76. Knowing that, I had to modify the BMPDisp.py file to replace “bmp280 = adafruit_bmp280.Adafruit_BMP280_I2C(i2c)” with “bmp280 = adafruit_bmp280.Adafruit_BMP280_I2C(i2c, address=0x76)”
    – Dockerfile: I needed to add more libs to the apt-get install line (libopenjp2-7-dev and libtiff5), change “pip” to “pip3”, add a RUN line (“RUN pip3 install RPi.GPIO”), and finally modify the CMD line to use “python3”
    – YAML: As with all YAML files, the indentation matters for the file to parse successfully. Copying and pasting the one shown in the blog doesn’t work; fixing the extra spaces against some sample Pod YAML is easy.

