Run Kubernetes on a Raspberry Pi with k3s

For a long time, I had been interested in building a Kubernetes cluster out of a stack of inexpensive Raspberry Pis. Following along with various tutorials on the internet, I was able to get Kubernetes installed and working in a three-Pi cluster. However, the RAM and CPU requirements on the master node overwhelmed my Pi. This caused poor performance when doing various Kubernetes tasks, and it also made an in-place upgrade of Kubernetes impossible.

As a result, I was very excited to see the k3s project. K3s is billed as a lightweight Kubernetes for use in resource-constrained environments, and it is also optimized for ARM processors. This makes running a Raspberry Pi-based Kubernetes cluster much more feasible. In fact, we are going to create one in this article.

Materials needed

To create the Kubernetes cluster described in this article, we will need:

  • At least one Raspberry Pi (with SD card and power adapter)
  • Ethernet cables
  • A switch or router to connect all our Pis together

We will be installing k3s from the internet, so the Pis will need to be able to reach the internet through the router.

An overview of our cluster

For this cluster, we are going to use three Raspberry Pis. The first we'll name kmaster and assign a static IP of 192.168.0.50 (since our local network is 192.168.0.0/24). The first worker node (the second Pi) we'll name knode1 and assign an IP of 192.168.0.51. The final worker node we'll name knode2 and assign an IP of 192.168.0.52.

Obviously, if you have a different network layout, you can use whatever network/IPs you have available. Just substitute your own values anywhere IPs are used in this article.

So that we don't have to keep referring to each node by IP, let's add their hostnames to the /etc/hosts file on our PC:

echo -e "192.168.0.50\tkmaster" | sudo tee -a /etc/hosts
echo -e "192.168.0.51\tknode1" | sudo tee -a /etc/hosts
echo -e "192.168.0.52\tknode2" | sudo tee -a /etc/hosts
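To confirm the entries took effect, we can query the resolver directly; on Linux, getent consults /etc/hosts:

```shell
# Verify that the new hostnames resolve to the IPs we just added
getent hosts kmaster knode1 knode2
```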

Installing the master node

Now we are ready to install the master node. The first step is to install the latest Raspbian image. I'm not going to explain that here, but I have a detailed article on how to do it if you need it. So please go install Raspbian, enable the SSH server, set the hostname to kmaster, and assign a static IP of 192.168.0.50.

Now that Raspbian is installed on the master node, let's boot our master Pi and ssh into it:

ssh pi@kmaster

Now we are ready to install k3s. On the master Pi, run:

curl -sfL https://get.k3s.io | sh -

When the command finishes, we already have a single-node cluster set up and running! Let's check it out. Still on the Pi, run:

sudo kubectl get nodes

You should see something similar to:

NAME     STATUS   ROLES    AGE    VERSION
kmaster  Ready    master   2m13s  v1.14.3-k3s.1

Extracting the join token

We want to add a couple of worker nodes. When installing k3s on those nodes, we will need a join token. The join token exists on the master node's filesystem. Let's copy that and save it somewhere we can get to it later:

sudo cat /var/lib/rancher/k3s/server/node-token
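Rather than copying the token by hand, we can pull it straight to the PC over ssh. This is a sketch assuming key-based ssh access to pi@kmaster and that the pi user can sudo without a password (the Raspbian default); node-token is just a local filename chosen for illustration:

```shell
# Save the master's join token into a local file for the worker installs
ssh pi@kmaster "sudo cat /var/lib/rancher/k3s/server/node-token" > node-token
```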

Installing the worker nodes

Grab some SD cards for the two worker nodes and install Raspbian on each. For one, set the hostname to knode1 and assign an IP of 192.168.0.51. For the other, set the hostname to knode2 and assign an IP of 192.168.0.52. Now, let's install k3s.

Boot your first worker node and ssh into it:

ssh pi@knode1

On the Pi, we will install k3s as before, but we will give the installer extra parameters to let it know that we are installing a worker node and that we would like to join the existing cluster:

curl -sfL https://get.k3s.io | K3S_URL=https://192.168.0.50:6443 \
K3S_TOKEN=join_token_we_copied_earlier sh -

Replace join_token_we_copied_earlier with the token from the "Extracting the join token" section. Repeat these steps for knode2.
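Assuming the join token has been saved to a local file (node-token is a hypothetical name) and we have key-based ssh access to both workers, the two installs can also be scripted from the PC in one loop:

```shell
# Install the k3s agent on each worker, joining it to the kmaster cluster
TOKEN=$(cat node-token)
for node in knode1 knode2; do
  ssh "pi@$node" "curl -sfL https://get.k3s.io | \
    K3S_URL=https://192.168.0.50:6443 K3S_TOKEN=$TOKEN sh -"
done
```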

Accessing the cluster from our PC

It would be annoying to have to ssh into the master node to run kubectl anytime we want to inspect or modify our cluster. So, we want to put kubectl on our PC. First, let's get the configuration information we need from our master node. Ssh into kmaster and run:

sudo cat /etc/rancher/k3s/k3s.yaml

Copy this configuration information and return to your PC. Make a directory for the config:

mkdir ~/.kube

Save the copied configuration as ~/.kube/config. Now edit the file and change the line:

server: https://localhost:6443

to be:

server: https://kmaster:6443

For security purposes, limit the file's read/write permissions to just yourself:

chmod 600 ~/.kube/config
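The manual copy-and-edit above can also be done in one shot from the PC. Here is a sketch, assuming key-based ssh access to pi@kmaster; sed rewrites the server line so kubectl talks to kmaster instead of localhost:

```shell
# Fetch the kubeconfig, point it at kmaster, and lock down permissions
mkdir -p ~/.kube
ssh pi@kmaster "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config
sed -i 's/localhost/kmaster/' ~/.kube/config
chmod 600 ~/.kube/config
```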

Now let's install kubectl on our PC (if you don't already have it). The Kubernetes site has instructions for doing this on various platforms. Since I'm running Linux Mint, an Ubuntu derivative, I'll show the Ubuntu instructions here:

sudo apt update && sudo apt install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" |
sudo tee -a /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install kubectl

If you're not familiar with them, the above commands add a Debian repository for Kubernetes, grab its GPG key for security, and then update the list of packages and install kubectl. Now, we will get notified of any updates to kubectl through the standard software update mechanism.

Now we can test our cluster from our PC! Run:

kubectl get nodes

You should see something like:

NAME     STATUS  ROLES   AGE   VERSION
kmaster  Ready   master  12m   v1.14.3-k3s.1
knode1   Ready   worker  103s  v1.14.3-k3s.1
knode2   Ready   worker  103s  v1.14.3-k3s.1

Congratulations! You have a working three-node Kubernetes cluster!

The k3s bonus

If you run kubectl get pods --all-namespaces, you will see some extra pods for Traefik. Traefik is a reverse proxy and load balancer that we can use to direct traffic into our cluster from a single entry point. Kubernetes allows for this but doesn't provide such a service directly. Having Traefik installed by default is a nice touch by Rancher Labs. This makes a default k3s installation fully complete and immediately usable!

We are going to explore using Traefik through Kubernetes ingress rules and deploy all sorts of goodies to our cluster in future articles. Stay tuned!
