A taste of Prometheus

Chloe Wang
6 min read · Jul 9, 2021


This blog is for readers who want a quick taste of Prometheus, the most popular monitoring and alerting project hosted by the CNCF (Cloud Native Computing Foundation), and for engineers who want to learn the basic usage of Prometheus, such as pulling metrics from node_exporter and from an instrumented Python web application.

Objectives

  • Understand the Prometheus architecture at a high level;
  • Run the Prometheus Server in a Docker container and check the metrics Prometheus itself exposes;
  • Run node_exporter in a Docker container and use Prometheus to monitor node_exporter’s metrics;
  • Instrument a Python web application with the Prometheus client library and use Prometheus to monitor the application’s metrics.

This blog will be short and sweet! Let’s get started.

Prometheus in a nutshell

Prometheus is an open source system for monitoring and alerting. Prometheus joined the CNCF in 2016 as the second hosted project, after Kubernetes. The Prometheus ecosystem consists of multiple components, and the documentation on prometheus.io has a detailed description of its features and components. The architecture diagram below is a simplified version borrowed from the official documentation.

Prometheus Architecture

In this blog, we will mainly focus on three components (the boxes highlighted in yellow in the diagram above): the Prometheus Server, Jobs/Exporters, and the Prometheus web UI. We will set up the Prometheus Server to pull metrics from Jobs/Exporters and use the Prometheus web UI to query the metrics and visualize them in both table and graph modes. You will also see how to monitor:

  • Jobs across Docker containers: Prometheus Server pulling from node_exporter
  • From a Docker container (Prometheus Server) to an application running on the local machine (a Python web application)

What are Job, Exporter, Instance and Target? Check the Prometheus Glossary [1].

Run Prometheus Server

1. Pull the Prometheus Docker image from Docker Hub

$ docker pull prom/prometheus

2. Run the Prometheus Server

$ docker run --name prom -d -p 127.0.0.1:9090:9090 prom/prometheus

3. Open the Prometheus web UI at http://localhost:9090. Confirm the Prometheus target is up by navigating to “Status” -> “Targets”; the Prometheus endpoint is shown as a target as below

Prometheus Target on Target Page

4. Execute a PromQL query to see the metrics: type `up` and click “Execute”, and you can see which job(s) are running:

Table view of `up` query

You should see at least the instance “localhost:9090” with the job name “prometheus” (in the yellow box above). In my case, 3 jobs are up.

Check the Graph view (by clicking the “Graph” tab next to “Table”):

Graph view of `up` query
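
By the way, anything you can query in the web UI can also be queried through Prometheus’s HTTP API at /api/v1/query. Here is a minimal sketch (not part of the original setup, just an illustration assuming the server is reachable at localhost:9090 as started above) that runs the same `up` query from Python:

# check_up.py (illustrative sketch, not from the original post)
import json
import urllib.request

# Ask Prometheus to evaluate the `up` query via its HTTP API.
url = "http://localhost:9090/api/v1/query?query=up"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Each result is one scrape target; value[1] is "1" when the target is up.
for result in data["data"]["result"]:
    labels = result["metric"]
    print(labels.get("job"), labels.get("instance"), "up =", result["value"][1])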

Use node_exporter

If you recall the architecture diagram, the Prometheus Server pulls metrics from Exporters and Jobs. An exporter is a binary running alongside the application whose metrics you want to obtain [1]. Why do we need exporters? Because sometimes it is not feasible to instrument the application directly, for example to collect the operating system kernel’s or hardware’s metrics.

1. Pull the node_exporter Docker image from Docker Hub

$ docker pull prom/node-exporter

2. Run the node-exporter

$ docker run --name prom-node-exporter -d -p 0.0.0.0:9100:9100 prom/node-exporter

3. Confirm node-exporter is exposing metrics at http://localhost:9100/metrics. You will see many metrics for CPU and disk usage.

4. Change the Prometheus Server’s config file so that the Prometheus web UI can be used to monitor the OS and hardware metrics. Since our Prometheus Server is in a Docker container, use docker exec to open a shell in the running container, then open the config file in an editor (vi in my case):

$ docker exec -it <Prometheus Server container-id> sh
/prometheus $ vi ../etc/prometheus/prometheus.yml

Modify the settings by appending node-exporter as a job. Please note that node-exporter and the Prometheus Server are running in two separate containers, so the target’s host name is not “localhost” or “0.0.0.0” but `host.docker.internal`.

...
  - job_name: node_exporter
    static_configs:
      - targets:
          - host.docker.internal:9100
...

The prometheus.yml file used for this blog can be found here.

5. Restart the Prometheus Server and check the Targets page again; you will see node-exporter is up and running

Node_exporter Target on Target Page

The same as the Prometheus Server in the last section, the metrics exposed by node-exporter are now available for query too:

node-exporter’s metrics

Try it and play with it! Monitor the CPU and disk usage of your local machine!
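
For example, node_exporter exposes the standard `node_cpu_seconds_total` counter, and you can ask Prometheus for the per-mode CPU rate over the last 5 minutes. A minimal sketch (again just an illustration, assuming Prometheus at localhost:9090 and the node_exporter job configured above):

# cpu_usage.py (illustrative sketch, not from the original post)
import json
import urllib.parse
import urllib.request

# Per-mode CPU usage rate over the last 5 minutes, excluding idle time,
# summed across all cores.
query = 'sum by (mode) (rate(node_cpu_seconds_total{mode!="idle"}[5m]))'
url = "http://localhost:9090/api/v1/query?" + urllib.parse.urlencode({"query": query})

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

for result in data["data"]["result"]:
    print(result["metric"]["mode"], result["value"][1])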

Instrument Python web application with Prometheus client library

In this section, a Python web application will be instrumented with the Prometheus client library for Python. The client library lets you define and expose the internal metrics you are interested in via an HTTP endpoint on your application’s instance. The client libraries implement the Prometheus metric types: Counter, Gauge, Histogram and Summary. To keep the blog short and sweet, the snippet below only uses a Gauge to measure the requests in progress:
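
Below is a minimal sketch of such an app (assuming a plain http.server handler; the metric name `app_request_inprogress` and the metrics port 8001 match the steps that follow, while the application port 8000 and the request handler itself are only illustrative):

# prom-gauge.py (minimal illustrative sketch)
import time
from http.server import BaseHTTPRequestHandler, HTTPServer

from prometheus_client import Gauge, start_http_server

# A Gauge that tracks how many requests are currently being processed.
IN_PROGRESS = Gauge("app_request_inprogress", "Application requests in progress")

class HelloHandler(BaseHTTPRequestHandler):
    # Option 1: the decorator increments the gauge on entry and decrements it on exit.
    @IN_PROGRESS.track_inprogress()
    def do_GET(self):
        # Option 2 (manual, equivalent): call IN_PROGRESS.inc() here ...
        time.sleep(5)  # simulate a slow request so the gauge is easy to observe
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello, Prometheus!")
        # ... and IN_PROGRESS.dec() here.

if __name__ == "__main__":
    start_http_server(8001)  # expose /metrics on port 8001
    HTTPServer(("0.0.0.0", 8000), HelloHandler).serve_forever()  # the app itself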

Please check the comments in the snippet. There are two options for instrumentation: decorate the handler function with the gauge’s track_inprogress() decorator, or call the methods inc() and dec() manually around the work being measured.

After the above instrumentation:

1. Run the app. Please make sure you have installed the Python Prometheus client library (pip install prometheus-client).

$ python prom-gauge.py

2. Confirm the metrics are exposed at http://localhost:8001/metrics

`app_request_inprogress` shows 1.0 while a request is being processed

3. Change the Prometheus Server’s config file so that the Prometheus web UI can monitor the Python web application’s metrics. As before, use docker exec to open a shell in the running container, then open the config file in an editor (vi in my case):

$ docker exec -it <Prometheus Server container-id> sh
/prometheus $ vi ../etc/prometheus/prometheus.yml

Modify the settings by appending “prom_python_app” as a job. Please note that the Python web application is running on the local machine while the Prometheus Server is running in a Docker container, so the target’s host name is not “localhost” or “0.0.0.0” but `docker.for.mac.localhost` (this host name is specific to Docker Desktop for Mac).

...
  - job_name: prom_python_app
    static_configs:
      - targets:
          - docker.for.mac.localhost:8001
...

The prometheus.yml file used for this blog can be found here.

4. Restart the Prometheus Server and check the Targets page again; you will see prom_python_app is up and running

Prom_python_app Target on Target Page

5. Check the Graph mode by querying app_request_inprogress

Graph view of `app_request_inprogress` query

Thank you for reading my blog! I hope you enjoyed it and got a quick taste of Prometheus. I learned most of this from the Udemy class [2], and I highly recommend it to you all!

The next step for me is diving deeper into Prometheus. I would like to know how service discovery works and how kube-state-metrics exposes metrics to Prometheus. Stay tuned~

References:

[1] https://prometheus.io/docs/introduction/glossary/#exporter

[2] Udemy: Prometheus | The Complete Hands-On for Monitoring & Alerting

Written by Chloe Wang

Senior Technical Staff Member on the IBM Finance and Operations team. A big fan and practitioner of applied machine learning and distributed cloud computing.
