Where are the logs of my K3s?

 Originally published on LinkedIn.




After getting my home K3s cluster up and running, one of the first tools I need with any system is something that gives me access to the generated logs: for troubleshooting, for security, or just to see what is happening under the hood. Until now, throughout my K8s journey, I have been searching the logs generated in my Kubernetes cluster with the simple views provided by kubernetes-dashboard, or with the kubectl command, for example:

kubectl -n pro-jellyfin logs services/jellyfin
kubectl -n pro-jellyfin logs deployments/jellyfin

That works well and has helped me solve almost every problem that appeared while deploying my cluster. But the results are not exactly fancy or good looking, so I wanted to deploy something that gives more convenient access to these logs.
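Before moving to anything bigger, it is worth knowing a few standard kubectl logs flags that make ad-hoc digging much easier (same jellyfin workload as above; these commands obviously need a live cluster):

```shell
# Follow the log output live, like tail -f:
kubectl -n pro-jellyfin logs deployments/jellyfin -f
# Only the last hour of logs:
kubectl -n pro-jellyfin logs deployments/jellyfin --since=1h
# Only the last 100 lines:
kubectl -n pro-jellyfin logs deployments/jellyfin --tail=100
# Logs from the previous container instance, useful after a crash:
kubectl -n pro-jellyfin logs deployments/jellyfin --previous
```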

There are many tools capable of fetching logs, storing them, and presenting them in a way that makes searching and access easy. I have used (and fed, and deployed) two of the best known, Splunk and the ELK stack, and I use my Splunk instance daily, but for my little system I needed something smaller. So, after trying Prometheus and Grafana a few days ago and seeing how common their products are in the Kubernetes world, I found Loki.

Loki, as we can read on the Grafana page, is a log aggregation system designed to store and query logs from all your applications and infrastructure. And best of all: they have a free forever cloud account with 10k metrics, 50GB of logs and 50GB of traces per month, with 14 days of retention (I am not sponsored). As those numbers are much larger than the volume of logs my little system can generate, deploying a local Loki and Grafana would give me nothing new, and for now I don't need anything more advanced like a SOAR or similar. With this setup in the cloud I don't have to dedicate my little infrastructure to maintaining the monitoring tools, and I can solve my log management needs very easily. So I will deploy my log monitoring there and have all my logs available wherever I am. Let's go, it's really easy.

The first step is to register a free account on the Grafana web page. They will also give us a 15-day trial, but we only need the logs, so the extra stuff is no problem:

https://grafana.com/auth/sign-up/create-user/ 

After registering we must start feeding Loki (and all the other components, at least at the beginning). I create the namespace, of course, with

kubectl create namespace pro-logs

in my case, but I will add it to my namespace generation folder (I deploy my whole cluster with a simple make deploy-all; you will see it there).
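Since my namespaces are created from manifests in that folder, a handy trick (standard kubectl flags) is generating the manifest from the same create command; the output filename here is just an example:

```shell
# Generate the namespace manifest without touching the cluster,
# so it can live next to the other namespace files:
kubectl create namespace pro-logs --dry-run=client -o yaml > pro-logs-namespace.yaml
```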

There are many ways to install the required containers: we can download the YAML files and install from them, we can use Helm, or we can even use the assistant that the Grafana portal provides. In this case I prefer to deploy everything by following their detailed guide, step by step.
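If you prefer the Helm route instead of the portal's guide, Grafana publishes its official charts at https://grafana.github.io/helm-charts; adding the repository looks like this (check the repo itself for the current chart names and values):

```shell
# Register Grafana's official Helm chart repository and refresh the index:
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update
```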


First, in the Manage pre-built dashboards and rules section, we click on Install dashboards and rules. With this, Grafana will include all the dashboards and rules in our account. After that, we open the Agent Configuration instructions.

In the first step of the dialog we can read the requirements: a K8s cluster, kubectl, helm, curl and envsubst. I assume all of this is installed on your system; if not, there are instructions for installing the missing parts. In the second step of the dialog we must first change/set the namespace to the correct one; in my case, as we have seen before, it is pro-logs.

After this we follow the dialog, copy/pasting every box from the clipboard to the console, until the end, and it will all be done. Just one comment: don't forget to replace "docker" with "cri" in step 4 (Loki) if your setup is the same as mine (see https://www.linkedin.com/pulse/kubernetes-home-jonas-alvarez/), and also be careful with the namespace in all the pastes.
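The docker/cri detail matters because K3s uses containerd, whose log format the docker pipeline stage cannot parse. A minimal sketch of the swap, using a stand-in file (the real file the wizard gives you has a different name and many more keys; only the pipeline stage line is the point here):

```shell
# Stand-in for the wizard's downloaded scrape config:
printf 'pipeline_stages:\n  - docker: {}\n' > /tmp/agent-logs.yaml
# Swap the docker stage for cri, as needed on containerd-based K3s:
sed -i 's/- docker: {}/- cri: {}/' /tmp/agent-logs.yaml
cat /tmp/agent-logs.yaml
```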

From this moment we will have all our logs in Grafana Cloud and, at least at the beginning, we will also see the metrics. Now we can start querying the logs of the cluster and our pods, for example:

[Screenshot: querying the logs of my Nextcloud instance]
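Outside the web UI, the same LogQL queries can also be run from a terminal with Loki's logcli tool; a hedged sketch, assuming your Grafana Cloud endpoint and credentials go in the standard LOKI_ADDR / LOKI_USERNAME / LOKI_PASSWORD environment variables (all values below are placeholders, and the namespace label comes from my own naming scheme):

```shell
# Placeholders: use your own Grafana Cloud Loki endpoint, user ID and API key.
export LOKI_ADDR="https://logs-prod-us-central1.grafana.net"
export LOKI_USERNAME="123456"
export LOKI_PASSWORD="<api-key>"
# Last hour of Nextcloud logs, only the lines containing "error":
logcli query --since=1h '{namespace="pro-nextcloud"} |= "error"'
```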

I like to be able to deploy and uninstall all my containers with a single command (make, in my case), so you can see the deployment process in my K8s home repo on GitHub.

Reference: My home little cluster

 
