
Raspberry Pi Kubernetes Cluster with Docker
I recently set up a small four-node Raspberry Pi cluster to play with Kubernetes and get some much needed exposure to it.
I set out with a simple mission: get an nginx (web server) Docker container running within the Kubernetes cluster. This would give me exposure to setting up Kubernetes (a master with three worker nodes) as well as managing Docker containers within or alongside K8s (Kubernetes).
Raspberry Pi Cluster
I have four Raspberry Pis set up on the network, all running a base Raspbian image. I set their hostnames and statically assigned IPs as follows:
Model | Hostname | IP Address | Role |
Pi 3B | host1 | 192.168.0.201 | worker |
Pi 3B | host2 | 192.168.0.202 | worker |
Pi 4 | host3 | 192.168.0.203 | worker |
Pi 4 | host4 | 192.168.0.204 | master |
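If you need to do the same, here is a minimal sketch for one of the workers, assuming a stock Raspbian image with the default raspberrypi hostname, dhcpcd managing eth0, and a router at 192.168.0.1 (adjust all of these to match your network):

# Set the hostname (host1 in this example)
sudo hostnamectl set-hostname host1
# Also update /etc/hosts so the Pi can resolve its own name
sudo sed -i 's/raspberrypi/host1/' /etc/hosts

# Assign a static IP by appending to /etc/dhcpcd.conf
cat <<'EOF' | sudo tee -a /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.0.201/24
static routers=192.168.0.1
static domain_name_servers=192.168.0.1
EOF

sudo reboot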
Personally I'd say the minimum to play with this and truly test things is three Raspberry Pis. A master with two worker nodes will allow you to see how to scale a container across multiple nodes and what happens when you lose a node.
Install Docker
For my purposes I installed Docker on my three “worker” Raspberry Pis. I accomplished this with a single command from the Raspberry Pi Official Blog – https://www.raspberrypi.org/blog/docker-comes-to-raspberry-pi/
On each of the workers run:
curl -sSL https://get.docker.com | sh
The script takes a few minutes on a Pi and prints its progress as it installs.

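Two optional follow-ups I'd suggest on each worker: add the pi user to the docker group so you don't need sudo for every Docker command, and run a quick test container to confirm the install (the group change takes effect after you log out and back in):

# Allow the pi user to run docker without sudo (log out and back in afterwards)
sudo usermod -aG docker pi

# Confirm Docker works by running the hello-world test image
docker run hello-world

# Check the installed version
docker version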
Lightweight Kubernetes
Since we are working with Raspberry Pis, we are going to use a lightweight version of Kubernetes called “K3s”. You can learn more about K3s here: https://k3s.io/
For further reading on K3s please see the docs here: https://rancher.com/docs/k3s/latest/en/
Follow the Quick Start Guide to install K3s on the Master first:
K3s provides an installation script that is a convenient way to install it as a service on systemd or openrc based systems. This script is available at https://get.k3s.io. To install K3s using this method, just run:
curl -sfL https://get.k3s.io | sh -
After running this installation:
- The K3s service will be configured to automatically restart after node reboots or if the process crashes or is killed
- Additional utilities will be installed, including kubectl, crictl, ctr, k3s-killall.sh, and k3s-uninstall.sh
- A kubeconfig file will be written to /etc/rancher/k3s/k3s.yaml and the kubectl installed by K3s will automatically use it
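Once the script finishes on the master, it's worth a quick sanity check before moving on. K3s runs as a systemd service and bundles its own kubectl, so something like this is enough:

# Confirm the k3s service is up
sudo systemctl status k3s

# The bundled kubectl should already be able to talk to the cluster
sudo kubectl get nodes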
Once the master has been installed we need to retrieve the K3S token that will be used as the node token when adding workers.
We can view this token with the following:
sudo cat /var/lib/rancher/k3s/server/node-token

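If you don't want to copy the long token by hand, one convenience (not required) is to stash it in a shell variable on the master and print a ready-made join command for the workers; the master URL below is mine, swap in your own:

# Stash the node token and print the join command for the workers
TOKEN=$(sudo cat /var/lib/rancher/k3s/server/node-token)
echo "curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.204:6443 K3S_TOKEN=$TOKEN sh -"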
Install K3s on Workers:
To install on worker nodes and add them to the cluster, run the installation script with the K3S_URL and K3S_TOKEN environment variables. Here is an example showing how to join a worker node:
curl -sfL https://get.k3s.io | K3S_URL=https://myserver:6443 K3S_TOKEN=mynodetoken sh -
Setting the K3S_URL parameter causes K3s to run in worker mode. The K3s agent will register with the K3s server listening at the supplied URL. The value to use for K3S_TOKEN is the token we retrieved earlier.
Note: Each machine must have a unique hostname. If your machines do not have unique hostnames, pass the K3S_NODE_NAME environment variable and provide a value with a valid and unique hostname for each node.
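For example, if a couple of the Pis were still using the default raspberrypi hostname, the join command could name them explicitly; the token and node name below are placeholders:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.204:6443 K3S_TOKEN=mynodetoken K3S_NODE_NAME=host1 sh -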
Using the above command as a template and plugging in my specifics yields the following command for me to install on each worker:
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.204:6443 K3S_TOKEN=K10107aea9e0cf180e38a03e722505318aeafab3d819ccc8e657143ec1d9e4d3375::server:46d628642ff77cf79653806c2ea1d8df sh -
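On a worker the service is called k3s-agent rather than k3s, so a quick check after the install looks like this:

# Confirm the agent service is running on the worker
sudo systemctl status k3s-agent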
Check Cluster:
At this point we should have our master as well as three worker nodes. We can check this on our master by running the following command:
kubectl get nodes
pi@host4:~ $ kubectl get nodes
NAME    STATUS     ROLES    AGE    VERSION
host4   Ready      master   15h    v1.17.4+k3s1
host1   Ready      <none>   15h    v1.17.4+k3s1
host2   Ready      <none>   124m   v1.17.4+k3s1
host3   NotReady   <none>   123m   v1.17.4+k3s1
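In my case host3 shows NotReady, which usually just means the agent hasn't finished registering (or has a problem). A couple of places worth looking, using the node and service names from this setup:

# On the master: see the node's conditions and recent events
kubectl describe node host3

# On the worker itself: check the agent service and follow its logs
sudo systemctl status k3s-agent
sudo journalctl -u k3s-agent -f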
Deploy Docker NGINX Container to Cluster:
In order to deploy NGINX we will follow along with the Kubernetes deployment documentation examples.
Let’s start by creating a file (from CLI on Master), mysite.yaml with the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mysite-nginx
  labels:
    app: mysite-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mysite-nginx
  template:
    metadata:
      labels:
        app: mysite-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: mysite-nginx-service
spec:
  selector:
    app: mysite-nginx
  ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: mysite-nginx-ingress
  annotations:
    kubernetes.io/ingress.class: "traefik"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: mysite-nginx-service
          servicePort: 80
So what does that all mean? No clue, just do it. Seriously though, let's try to break it down some. The first thing to note is that this is broken into three separate sections; technically it's three YAML documents in one file. The --- separator marks the start of a new YAML document, so it's important to make sure you have the --- lines where specified.
The sections are broken down into: Deployment, Service and Ingress.
Within Deployment we have named our deployment mysite-nginx, with an app label of mysite-nginx as well. We have specified that we want one replica, which means there will only be one pod created. We also specified one container, which we named nginx, and we specified the image to be nginx. This means, on deployment, K3s will download the nginx image from Docker Hub and create a pod from it. Finally, we specified a containerPort of 80, which just means that inside the container the pod will listen on port 80.
Within Service we have named our service mysite-nginx-service. We provided a selector of app: mysite-nginx. This is how the service chooses the application containers it routes to. Remember, we provided an app label for our container as mysite-nginx. This is what the service will use to find our container. Finally, we specified that the service protocol is TCP and the service listens on port 80.
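Once the deployment is applied (further down), one way to confirm the selector actually matched our pod is to list the service's endpoints; if the label and selector line up, you'll see the pod's IP listed:

kubectl get endpoints mysite-nginx-service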
Within Ingress we have named the ingress record mysite-nginx-ingress. And we told Kubernetes that we expect traefik to be our ingress controller with the kubernetes.io/ingress.class annotation. In the rules section, we are basically saying: when http traffic comes in, and the path matches / (or anything below that), route it to the backend service specified by the serviceName mysite-nginx-service, and route it to servicePort 80. This connects incoming HTTP traffic to the service we defined earlier.
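Again, once everything is applied you can inspect the ingress to see that Traefik picked it up and that it points at the right backend:

kubectl get ingress
kubectl describe ingress mysite-nginx-ingress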
Finally we are ready to deploy the container!
kubectl apply -f mysite.yaml
You should see something like this:
pi@host4:~ $ kubectl apply -f mysite.yaml
deployment.apps/mysite-nginx created
service/mysite-nginx-service created
ingress.networking.k8s.io/mysite-nginx-ingress created
Now let’s check our pod status:
pi@host4:~ $ kubectl get pods
NAME                           READY   STATUS              RESTARTS   AGE
mysite-nginx-6fd766c7b-j5v2g   0/1     ContainerCreating   0          11s
If you see a status of ContainerCreating, give it some time and run kubectl get pods again. Typically, the first time, it will take a while because K3s has to download the nginx image to create the pod. After a while, you should get a status of Running.
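If it seems stuck, the pod's events usually tell you why (image pull progress, scheduling problems, and so on); you can also watch the pod list until the status changes. Substitute your own pod name from kubectl get pods:

# Show the pod's details and recent events
kubectl describe pod mysite-nginx-6fd766c7b-j5v2g

# Or watch the pod list until the status changes
kubectl get pods -w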
After a little time we should see things as Running:
pi@host4:~ $ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
mysite-nginx-6fd766c7b-j5v2g   1/1     Running   0          89s
Check the website. Fire up your browser and go to the IP address of your master; in my case that is 192.168.0.204. You should be greeted by the default nginx welcome page.

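If you'd rather check from the command line, a quick curl against the master's IP (swap in your own) should return the nginx welcome page HTML:

curl -s http://192.168.0.204 | head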
Recap – Where are we now?
So at this point we have deployed Kubernetes across multiple nodes. We also have Docker installed on our worker nodes. We have defined a Docker container (a web server) and we now have it running on our cluster.
Depending on how many Raspberry Pis you have, things may be slightly different for you. But one thing is true: our nginx Docker container is only running on one single node. If you refer back to the Deployment section of our yaml file, you will recall we set our replicas to 1. So if we power off the node that is running the pod right now, what will happen to our site?
If you said it will go down, you are right. Despite this being a cluster, we haven't scaled our app.
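To see exactly which node the pod landed on, ask kubectl for the wide output; the NODE column is the interesting part here:

kubectl get pods -o wide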
I personally found how easy it is to scale (run the app on multiple nodes) using Kubernetes quite impressive.
Scale our App!
Let’s take a look at what I’m talking about and see how this is running on a single node.
pi@host4:~ $ kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
mysite-nginx   1/1     1            1           28m
By running this command we can see there is only one replica of the app, so it is running on a single host. If that host / node goes down, so does our site.
Since we have multiple nodes available to us, let’s try scaling our app to run on multiple nodes.
To scale this across a second node it is as easy as:
kubectl scale deployment mysite-nginx --replicas 2
When running this we should see something like the following:
pi@host4:~ $ kubectl scale deployment mysite-nginx --replicas 2
deployment.apps/mysite-nginx scaled
That’s it. You can now check and see that the app is scaled across multiple nodes:
pi@host4:~ $ kubectl get deployment
NAME           READY   UP-TO-DATE   AVAILABLE   AGE
mysite-nginx   2/2     2            2           30m
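As an aside, the same result can be reached declaratively: bump the replicas value in mysite.yaml and re-apply it, which keeps the file as the source of truth for the deployment. A sketch, assuming you edit the file on the master:

# In mysite.yaml change:
#   replicas: 1
# to:
#   replicas: 2
# then re-apply the manifest
kubectl apply -f mysite.yaml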
So now you have to test this, right? Previously we could unplug the single node running the pod, go back to the site, and see that it was down. Now, when we unplug that node, the site remains reachable.