Change the logging level using ConfigMap

Kajal Rawal
Dec 14, 2020

Logs are often known as one of the three pillars of observability. They are great for debugging code running in production. However, they can also create a lot of data and noise. You want deep insight into your application, but at the same time you only care about the information that matters. That's why logging libraries offer multiple logging levels, for example:

  • panic (5)
  • fatal (4)
  • error (3)
  • warn (2)
  • info (1)
  • debug (0)
  • trace (-1)

You specify your global logging level, and all logs with this level and greater will show up. For example, if you set your global logging level to warn (2), all logs with the levels error (3), fatal (4) and panic (5) will show up as well. That's great for production code because you can log only the most important things during normal operation. If you run into a problem and have to dig deeper, simply lower the logging level and all info and debug logs appear too.

So how do you change the logging level for your production application that is running in a Kubernetes cluster?

For this post we’re using Go but the principles can be applied to any programming language. We assume you have multiple replicas of your application running to ensure zero downtime deployments. We’re using a ConfigMap to store the current logging level. This ConfigMap is then translated into environment variables that the application reads during startup.

Save this very simple ConfigMap to a file called configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configuration
  namespace: default
data:
  log_level: info

Move it into your cluster by using kubectl:

$ kubectl apply -f configmap.yaml

It should now appear in your Kubernetes dashboard.

At the moment our application doesn’t know anything about this ConfigMap. We have to connect the two by adding some information to the deployment. Here we have a basic deployment, and we add everything starting at env:

apiVersion: apps/v1
kind: Deployment
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: LOG_LEVEL
              valueFrom:
                configMapKeyRef:
                  name: configuration
                  key: log_level

It’s really straightforward. We specify a new environment variable LOG_LEVEL that reads its value from our ConfigMap named configuration. Since our ConfigMap might have multiple key/value pairs, we tell the deployment to use the log_level key. Now we have the environment variable LOG_LEVEL with the value info.

Here is a little diagram to illustrate the whole concept

Logging using ConfigMap

Now everything is ready and you’re able to change the logging level on the fly. Simply change the value of the log_level key in your ConfigMap and apply it to the cluster. Then initiate a rolling restart (e.g. kubectl rollout restart deployment <name>) so your pods pick up the new environment variables. We're using make for automation.

On our local machines we keep the application outside the Kubernetes cluster. It makes debugging easier and restarts during development much faster since you don’t have to create new containers. We simply use make to set the environment variable.

For a Spring Boot application, point the JAVA_OPTS environment variable at your Logback configuration:

JAVA_OPTS="-Dlogging.config=<path for logback>\logback-spring.xml"

Run your application, and change the path accordingly in the ConfigMap entry for the JAVA_OPTS environment variable.

Now, every time logback-spring.xml changes (for example because the ConfigMap it comes from was updated), Logback detects the change and updates the application's log level accordingly.

You can use the following property in logback.xml to enable the auto-scan feature:

<configuration debug="true" scan="true" scanPeriod="10 seconds">

You can also mount a ConfigMap as a volume. Check out my blog for that.
