Kubernetes at home

Originally published on LinkedIn.

The sysadmin's perspective: deploying a cluster with K3s.

The mini-PC, my single-node Kubernetes cluster

 

 

Around the year 2000 it became common to virtualize enterprise computing environments. Since then, many servers and network devices have disappeared from corporate data centers and the rows of racks have emptied out: we went from cabinets full of equipment to today's nearly empty ones. The decision was ours to make: keep the classic approach, with bare-metal servers doing all the work, or move our workloads to virtualized environments. Some of us moved slowly, but in the end we all virtualized our systems, and now we see virtualization as the easiest and most efficient way to operate and maintain our infrastructure while comfortably meeting our service levels.

The next logical step was to take advantage of this abstraction and move our workloads to the cloud, simply by migrating the virtual machines there, although for many of us cloud services were still expensive and the numbers often didn't add up. Meanwhile, the space savings allowed our original production environments to multiply, and the number of environments to manage grew: development, pre-production, and production, in almost every organization governed by heavily procedural methodologies. We went from the simplicity of the original bare-metal data center to the same systems multiplied by the functions they had to serve.

And once again, economies of scale pushed us to be more efficient. For some years now we have been seeing other ways to improve the efficiency of our environments. At first it is the IT providers who bring us offers based on containerized applications, hybrid or in the cloud, perhaps on EKS (Amazon), GKE (Google), AKS (Azure) or any other, because for them it is far more effective to manage everything this way. Then we migrate some of our own applications, because it is more effective and, for example, lets us keep some legacy applications alive, just as we did with virtualization. We are once again in a process of change toward these new, more optimized environments, so we had better start learning the technologies we are going to adopt, sometimes without even realizing it.

So, I started reading a book about Docker, the starting point of these new technologies, which expanded the isolation capabilities of the Linux kernel to an unsuspected level. Docker manages containers from the simplest single-container case, through tools such as docker-compose, which makes deploying and managing an application very easy, up to production-grade tools like Docker Swarm for managing clusters of nodes running containers from different applications.

I started working with Podman, now the go-to open-source tool for working with containers in the Linux world, because it was the out-of-the-box solution in my Red Hat developer installation. Although we can create

alias docker=podman 

and run many commands exactly as in Docker, even docker-compose, it is not a full replacement for the complete Docker product. So, in the Red Hat world, if we need to use this type of technology fully we must consider other orchestrators, and that brings us to the world of Kubernetes (my target), the word that appears in the title of the book I am reading now.

I wanted to get to know Kubernetes, so I needed to create my first single-node Kubernetes cluster. I like to have a PC at home that is always on and connected to the Internet, so I can ssh into it remotely, check things from a different address, and run some other stuff. Last year I bought a mini-PC for these tasks for less than 200€. It is a quad-core Celeron with a 128GB SSD and 8GB of RAM that makes a bit of noise from time to time, not annoying (but the next one will be fanless), to which I added an old SSD I had at home. It came with Windows 10 preinstalled, so my first task was to remove Windows, install the latest Red Hat release without the graphical environment, and register it with my Red Hat developer subscription. For now I only have this PC, and with this setup I will not have HA, but I don't want to spend more on hardware at the moment. My first attempt was a full Kubernetes deployment; I met almost all the requirements, but the scripts stopped at the first step: I had only 7.5GB of RAM. I needed another product with lower requirements.

These were my alternatives:

  • minikube: it is meant for learning, not for production, and in the end I want to run some other stuff there.
  • MicroK8s: my system is Red Hat, so I would need to install snap first. That's OK, but I prefer not to change the default sources. Maybe in the future.
  • K3s: a lightweight Kubernetes distribution created by Rancher (SUSE) and certified by the CNCF (https://www.cncf.io/certification/software-conformance/). And some of its flavors:
    • K3d: K3s in Docker.
    • K3OS: an Alpine-based distribution with K3s preinstalled. I have Red Hat running on my box and I don't want to reinstall the OS now.

So, for me the choice was clear: K3s. Simple, open source, very small, with all the components integrated into a single executable, and on top of that it is an officially Certified Kubernetes Distribution. It met almost all my requirements: low hardware demands (it even runs on a Raspberry Pi 4), and working with it is almost like working with a full Kubernetes cluster.

In the future I could buy another small PC and, after some adjustments, expand my cluster to a second node, making it HA, or even extend it to a cloud provider by running a Linux VM there hosting a third K3s node, stretching my cluster into the cloud. I could also install Longhorn and build a distributed file system across some of my nodes. In any case, with this setup I have already learned a lot about these technologies, and there is still much more to learn, which is the best part :-)

So, let’s go.

 

Preinstallation

It is recommended to disable swap:

[root@lan manual]# free -h
             total       used       free     shared buff/cache  available
Mem:         7.5Gi      2.9Gi      645Mi       26Mi      4.0Gi      4.3Gi
Swap:        7.7Gi      1.0Mi      7.7Gi
[root@lan manual]# swapoff -a
[root@lan manual]# free -h
             total       used       free     shared buff/cache  available
Mem:         7.5Gi      2.9Gi      684Mi       26Mi      4.0Gi      4.3Gi
Swap:           0B         0B         0B

And don't forget to comment out the swap line in the /etc/fstab file:

#/dev/mapper/rhel-swap  none                   swap   defaults       0 0
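That edit can also be scripted; here is a minimal sketch, run against a scratch copy of the file so nothing real is touched (for the real thing, back up /etc/fstab first and point sed at it):

```shell
# Scratch copy with a typical swap entry, like the one in my fstab.
printf '/dev/mapper/rhel-swap  none  swap  defaults  0 0\n' > /tmp/fstab.example
# Prefix every still-active line mentioning swap with '#'.
sed -i '/\bswap\b/ s/^[^#]/#&/' /tmp/fstab.example
cat /tmp/fstab.example
```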

Installation

Very very easy to install. As root:

curl -sfL https://get.k3s.io | \
       INSTALL_K3S_EXEC='server --cluster-init --write-kubeconfig-mode=644' \
       sh -s -

I like to have kubectl bash completion activated, so I execute:

kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl

And to activate it right away, without restarting the session:

source <(kubectl completion bash)

For more details go to: 

https://documentation.suse.com/trd/kubernetes/single-html/kubernetes_ri_k3s-sles/index.html 

 

After installing the software, we will have the K3s service started and enabled and some other stuff:

  • kubectl: for managing the objects of our new cluster: containers, pods, services, deployments, ingresses, RBAC, ...
  • k3s-killall.sh and k3s-uninstall.sh, for removing the installation and leaving our server as it was before K3s was installed.
  • The kubeconfig file, in /etc/rancher/k3s/k3s.yaml, fully configured.

[root@lan manual]# ls -lh /usr/local/bin/
lrwxrwxrwx. 1 root root   3 Oct 6 10:10 crictl -> k3s
lrwxrwxrwx. 1 root root   3 Oct 6 10:10 ctr -> k3s
-rwxr-xr-x. 1 root root 67M Oct 6 10:07 k3s
-rwxr-xr-x. 1 root root 2.1k Oct 6 10:10 k3s-killall.sh
-rwxr-xr-x. 1 root root 1.4k Oct 6 10:10 k3s-uninstall.sh
-rwxr--r--. 1 root root 21M Oct 1 14:06 kompose
lrwxrwxrwx. 1 root root   3 Oct 6 10:10 kubectl -> k3s
(...)

From now on, we can start containers just by saving their declarative YAML files in the folder /var/lib/rancher/k3s/server/manifests.
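For example, a minimal manifest dropped into that folder gets applied automatically by K3s; the name and image below are my own illustration, not something K3s ships:

```yaml
# /var/lib/rancher/k3s/server/manifests/whoami.yaml (hypothetical example)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whoami
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: whoami
  template:
    metadata:
      labels:
        app: whoami
    spec:
      containers:
      - name: whoami
        image: traefik/whoami
        ports:
        - containerPort: 80
```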


Firewall

Many web sites say the next step is to disable the firewall, but I disable the firewall only for testing, when something doesn't work as expected, so instead I explicitly allow the following ports:

sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --permanent --add-port=2379-2380/tcp
sudo firewall-cmd --permanent --add-port=10250/tcp
sudo firewall-cmd --permanent --add-port=10251/tcp
sudo firewall-cmd --permanent --add-port=10252/tcp
sudo firewall-cmd --permanent --add-port=10255/tcp
sudo firewall-cmd --permanent --add-port=8472/udp

And move the K3s internal interface (cni0) to the trusted zone, where all access is allowed:

sudo firewall-cmd --permanent --zone=trusted --add-interface=cni0
sudo firewall-cmd --reload

In any case, I prefer to do it the old way: look at the firewall logs and open only what is really needed, nothing more:

sudo firewall-cmd --set-log-denied=unicast
sudo firewall-cmd --reload
sudo journalctl -f -x -el

And then work with commands such as:

sudo firewall-cmd --permanent --add-port=<port>/tcp
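To keep that repeatable, the commands can be generated from a list and reviewed before running anything; a small sketch, where the port list is just the example set from above:

```shell
# Ports to open (example values); edit to match what the denied log shows.
ports="6443/tcp 2379-2380/tcp 10250/tcp 8472/udp"
for p in $ports; do
  printf 'sudo firewall-cmd --permanent --add-port=%s\n' "$p"
done > /tmp/fw-commands.sh
cat /tmp/fw-commands.sh   # review first, then run with: sh /tmp/fw-commands.sh
```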

 

 Deployment of the applications

I deploy everything with make, the traditional old way: https://github.com/JonasAlvarez/k8s-home . In the end, after cloning my repository, it is as simple as executing:

make deploy-all

Remember to fill in the private information in the secret files; then all the containers will be running. You can check it with, for example:

kubectl get pods -o wide
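As for those secret files, a minimal sketch of what one might look like; the name and key below are hypothetical, not the actual files in my repository:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mariadb-credentials   # hypothetical name
type: Opaque
stringData:                   # stringData lets you write plain text;
  MYSQL_ROOT_PASSWORD: changeme   # the API server stores it base64-encoded
```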

I have my own domain and a free wildcard certificate (obtained from Let's Encrypt). Now I can access my Kubernetes dashboard on my LAN at https://dashboard.sysadm.org/, the Traefik ingress at https://traefik.sysadm.org/dashboard/ , and the other sites at their respective URLs.

At the moment these are the applications deployed on my mini-PC (the details are in the Makefile in my repository):

  • Pi-hole, an ad blocker for my home devices.
  • Kubernetes Dashboard, to manage and monitor the cluster.
  • Nginx, a web server with my static content.
  • MariaDB, the backend database used by some other applications.
  • A BitTorrent client, for downloading some cultural stuff.
  • Plex, so I can play media directly on my TV.
  • Homer, a dashboard linking all my applications.

In the next few days I want to add some other applications:

  • Openswan, as a VPN, so I can connect and browse through my ad blocker.
  • ownCloud, to have some shared folders available on my home network.
  • WordPress, a pre-staging site for generating content for my static web.
  • Portainer, a universal container manager, free for up to 5 nodes. Maybe I will use Rancher instead.
  • Prometheus, for cluster alerting and monitoring, with fancy Grafana dashboards.

I could install all this directly on the same PC, but there would be many applications running on one server, and I don't like that kind of setup very much: the dependencies would make it impossible to manage, and the security of one application would affect the security of the others. The other option would be to isolate them by installing each application in its own virtual machine, but then the PC would be overloaded and not very efficient. The best solution is to run everything containerized. Right now my mini-PC is not heavily loaded: all these applications are running and I still have plenty of room to deploy many more.

 

Changing the Local Kubernetes Context

Maybe you are working from another machine and it is more convenient to control the cluster from your own local terminal without having to ssh into the master node. For this, copy the contents of /etc/rancher/k3s/k3s.yaml from the main node into your local ~/.kube/config, replace the localhost server address with the IP or name of the main K3s node, and switch contexts in your session with kubectl config use-context <yourcontext>.
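The address rewrite can be sketched like this; the file content below is a short stand-in for the real k3s.yaml, and the hostname k3s-node.lan is a placeholder for your node's IP or name:

```shell
mkdir -p /tmp/kube-demo
# Stand-in for the copied k3s.yaml; the real file is much longer.
cat > /tmp/kube-demo/config <<'EOF'
apiVersion: v1
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF
# Point the kubeconfig at the remote node instead of localhost.
sed -i 's/127\.0\.0\.1/k3s-node.lan/' /tmp/kube-demo/config
grep server /tmp/kube-demo/config   # the server line now names the node
```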


Up next: Where are the logs of my cluster?

 

 

 
