Monitor a Linux Server Easily (Grafana, Prometheus, Node Exporter, Docker Compose)
Monitor Linux server metrics such as CPU usage, memory usage, and storage space within minutes using Docker Compose, Prometheus, Node Exporter, and Grafana. All of the Grafana configuration will be handled with provisioning.
Table of Contents 📖
- How it Works
- Environment Variables
- Configuring Prometheus
- Volumes and Networks
- Prometheus Service
- Node Exporter Service
- Grafana Service
- Grafana Provisioning
- Running the Application
How it Works
The server monitoring stack will consist of 3 pieces: Node Exporter, Prometheus, and Grafana. All will be running in Docker containers spun up with Docker Compose. Prometheus will scrape Node Exporter and store the metrics, and Grafana will then query Prometheus and display the metrics on a dashboard.
INFO: System metrics are collected from within a Docker container by using Docker volumes to map the relevant host directories into the Node Exporter container.
- Node Exporter - Collects system-level metrics from Unix-like systems and exposes them in a format that Prometheus can scrape. Example metrics are CPU usage, memory usage, and disk I/O. We can then use these metrics to monitor a system's performance and health.
- Prometheus - Prometheus is designed to capture metrics by scraping an HTTP endpoint, known as a metrics endpoint. Metrics endpoints are the standard way for an application to expose metrics. Prometheus will periodically scrape a provided HTTP endpoint and store the data. We can then query that data with PromQL to get an idea of what is going on in the application (see the example query after this list).
- Grafana - A web application that allows us to visualize data sources using graphs, charts, etc. Example data sources are Prometheus and Postgres. In this example we will use Prometheus.
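To give a feel for what this looks like in practice, Node Exporter publishes metrics with names like node_cpu_seconds_total and node_memory_MemAvailable_bytes, and Grafana panels fetch them from Prometheus using PromQL queries. Below is a rough sketch of a PromQL expression for average CPU usage per host; the prebuilt dashboard we import later ships with its own queries, so this is purely illustrative.

# Percentage of CPU time spent in non-idle modes over the last 5 minutes
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)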
Environment Variables
Let's use environment variables to define the container name and port for each of our Docker containers. Place these in a .env file at the top level of the project.
NODE_EXPORTER_CONTAINER=node-exporter-c
NODE_EXPORTER_PORT=9100
PROMETHEUS_CONTAINER=prometheus-c
PROMETHEUS_PORT=9090
GRAFANA_CONTAINER=grafana-c
GRAFANA_PORT=3000
Configuring Prometheus
Now let's configure Prometheus with a prometheus.yml file. This configuration sets up Prometheus to scrape metrics from Node Exporter and from the Prometheus server itself.
# This configuration file sets up Prometheus to scrape metrics from defined sources,
# including Prometheus itself and Node Exporter, with the ability to customize scrape intervals.
global:
  # Sets the default scraping interval for all jobs to 15 seconds.
  scrape_interval: 15s

scrape_configs:
  # Configuration for scraping metrics from the Prometheus server itself.
  - job_name: 'prometheus'
    # Overrides the global scrape interval to 10 seconds for the Prometheus job.
    scrape_interval: 10s
    static_configs:
      # Targets the Prometheus instance running at 'prometheus-c' on port 9090.
      - targets: ['prometheus-c:9090']

  # Configuration for scraping metrics from the Node Exporter, which provides system-level metrics.
  - job_name: 'node-exporter'
    static_configs:
      # Targets the Node Exporter instance running at 'node-exporter-c' on port 9100.
      - targets: ['node-exporter-c:9100']
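If you want to sanity-check this file before starting the stack, the Prometheus image ships with the promtool utility, which can validate a configuration file. A quick sketch, assuming the file lives at ./config/prometheus.yml as it does in this project:

# Validate prometheus.yml using promtool from the official Prometheus image
docker run --rm --entrypoint promtool \
  -v "$(pwd)/config/prometheus.yml:/etc/prometheus/prometheus.yml:ro" \
  prom/prometheus:latest check config /etc/prometheus/prometheus.yml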
WARNING: Note that we are not using environment variables in prometheus.yml. This is because Prometheus does not natively support environment variable substitution in its configuration file.
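If you do want to parameterize this file, one common workaround (not used in this stack, shown only as a sketch) is to keep a template and render it with envsubst before bringing the stack up; the template filename below is hypothetical.

# Render prometheus.yml from a hypothetical template using the variables defined in .env
set -a; source .env; set +a
envsubst < config/prometheus.yml.template > config/prometheus.yml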
Volumes and Networks
Let's start off our Docker Compose configuration by defining the network and volumes. A Docker bridge network uses a software bridge to let the containers connected to it communicate with each other, while isolating them from containers that aren't connected to that bridge network.
# Defines the custom network used by all services in this stack
networks:
  monitoring:
    driver: bridge # Uses a bridge network to allow containers to communicate

# Defines the volumes for persistent data storage
volumes:
  prometheus_data:
    name: prometheus_data # Defines a named volume for Prometheus data
  grafana_data:
    name: grafana_data # Defines a named volume for Grafana data
Prometheus Service
Now let's configure the Prometheus service using Docker Compose (like the other services below, it goes under the top-level services: key of docker-compose.yml). Here we mount our Prometheus configuration file into the container using volumes and expose the default Prometheus port of 9090 inside the Docker network. We also comment out the prometheus_data volume, as we don't want to persist Prometheus data for this demonstration.
# Service definition for Prometheus (collects and stores metrics from Node Exporter)
prometheus:
  image: prom/prometheus:latest # Specifies the Docker image for Prometheus
  container_name: ${PROMETHEUS_CONTAINER} # Container name taken from the .env file
  restart: always # Ensures the container always restarts if it crashes
  env_file: .env # Loads environment variables from the .env file
  volumes:
    - ./config/prometheus.yml:/etc/prometheus/prometheus.yml # Mounts the local Prometheus configuration file into the container
    # - prometheus_data:/prometheus # (Optional) Mounts a volume for persistent Prometheus data storage
  expose:
    - ${PROMETHEUS_PORT} # Exposes the Prometheus port defined in the .env file
  networks:
    - monitoring # Connects Prometheus to the "monitoring" network
  depends_on:
    - node-exporter # Ensures Node Exporter starts before Prometheus
INFO: The expose key only makes the port reachable by other containers on the Docker network, not by the host machine or the outside world.
Node Exporter Service
For the Node Exporter service we map several host directories into the container as volumes so it can collect system-level metrics: the root filesystem, /sys, and /proc. We mount them as read-only to ensure they cannot be modified from inside the container.
- /sys - provides an interface to the kernel that exposes system-related information.
- /proc - provides an interface to the kernel that exposes process-related information.
- / - the root filesystem of the host machine, containing all directories and files on the system.
# Service definition for Node Exporter (collects system metrics)
node-exporter:
  image: prom/node-exporter:latest # Specifies the Docker image for Node Exporter
  container_name: ${NODE_EXPORTER_CONTAINER} # Container name taken from the .env file
  restart: always # Ensures the container always restarts if it crashes
  env_file: .env # Loads environment variables from the .env file
  volumes:
    - /proc:/host/proc:ro # Mounts the /proc directory from the host into the container (read-only)
    - /sys:/host/sys:ro # Mounts the /sys directory from the host into the container (read-only)
    - /:/rootfs:ro # Mounts the root filesystem of the host into the container (read-only)
  expose:
    - ${NODE_EXPORTER_PORT} # Exposes the Node Exporter port defined in the .env file
  networks:
    - monitoring # Connects Node Exporter to the "monitoring" network
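Note that these mounts only make the host directories visible inside the container; depending on which metrics you care about (filesystem usage in particular), you may also want to tell Node Exporter to read from the mounted paths instead of the container's own /proc, /sys, and root filesystem. A sketch of the extra command key for the service above, using standard Node Exporter flags:

  command:
    - '--path.procfs=/host/proc' # Read process information from the mounted host /proc
    - '--path.sysfs=/host/sys' # Read system information from the mounted host /sys
    - '--path.rootfs=/rootfs' # Use the mounted host root filesystem for filesystem metrics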
Grafana Service
For our Grafana service, we want to map the port to the host machine so we can access the web application. The volumes are used to import premade Grafana configuration files: the datasources.yml file will connect our Prometheus data source, and the dashboard.yaml file will load a pre-made Grafana dashboard.
# Service definition for Grafana (visualizes the metrics collected by Prometheus)
grafana:
  image: grafana/grafana # Specifies the Docker image for Grafana
  container_name: ${GRAFANA_CONTAINER} # Container name taken from the .env file
  restart: always # Ensures the container always restarts if it crashes
  ports:
    - ${GRAFANA_PORT}:${GRAFANA_PORT} # Maps the Grafana port from the container to the host
  env_file: .env # Loads environment variables from the .env file
  volumes:
    - ./config/datasources.yml:/etc/grafana/provisioning/datasources/datasources.yml # Mounts the Grafana data source configuration file
    - ./config/dashboard.yaml:/etc/grafana/provisioning/dashboards/dashboard.yaml # Mounts the Grafana dashboard provider configuration file
    - ./config/dashboard.json:/var/lib/grafana/dashboards/dashboard.json # Mounts a Grafana dashboard definition file
    # - grafana_data:/var/lib/grafana # (Optional) Mounts a volume for persistent Grafana data storage
  networks:
    - monitoring # Connects Grafana to the "monitoring" network
  depends_on:
    - prometheus # Ensures Prometheus starts before Grafana
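Because the .env file is already loaded into the container via env_file, it can also be used to set the Grafana admin credentials through Grafana's standard GF_SECURITY_ADMIN_USER and GF_SECURITY_ADMIN_PASSWORD environment variables. A sketch of the extra .env entries (the values are placeholders; if you set these, sign in with them instead of the defaults mentioned at the end of this article):

GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=change-me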
Grafana Provisioning
To make life easier, we will use provisioning to configure Grafana instead of going in and adding everything manually. Below are the configuration files for setting up the data source (datasources.yml) and the dashboard provider (dashboard.yaml).
# Grafana Datasources Configuration
# This file defines the data sources that Grafana will use to fetch data for dashboards.
apiVersion: 1 # Specifies the API version for this configuration.

datasources: # List of data sources to be configured
  - name: Prometheus # The name of the data source, as it will appear in Grafana.
    type: prometheus # The type of the data source, indicating it is a Prometheus instance.
    # Access mode - 'proxy' means requests are made from the Grafana server,
    # while 'direct' would mean requests are made from the user's browser.
    access: proxy
    # The URL where the Prometheus server can be accessed. This uses environment
    # variables for dynamic configuration of the container name and port.
    url: http://${PROMETHEUS_CONTAINER}:${PROMETHEUS_PORT}
# Grafana Dashboard Provider Configuration
# This file defines how Grafana will manage and load dashboards.
apiVersion: 1 # Specifies the API version for this configuration.

providers: # List of dashboard providers
  - name: "Dashboard provider" # A descriptive name for the dashboard provider.
    orgId: 1 # The organization ID; typically '1' for the default organization in Grafana.
    type: file # Specifies that the provider type is 'file', meaning dashboards are loaded from the filesystem.
    disableDeletion: false # If true, prevents deletion of these dashboards via the UI.
    updateIntervalSeconds: 10 # How often (in seconds) Grafana checks for updates to the dashboard files.
    allowUiUpdates: false # If true, allows updates to these dashboards through the Grafana UI.
    options: # Additional options for the provider
      path: /var/lib/grafana/dashboards # The file system path where dashboard files are located.
      foldersFromFilesStructure: true # If true, creates folders based on the file structure of the dashboard files.
Let's also use a premade dashboard file for Node Exporter. The one we will be using is Node Exporter Full, available at https://grafana.com/grafana/dashboards/1860-node-exporter-full/; it provides everything we need to get our monitoring stack up and running quickly. Download it and save it as config/dashboard.json. Our Grafana Docker Compose service mounts it at /var/lib/grafana/dashboards/dashboard.json inside the container, which is where the dashboard provider looks for dashboards.
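If you prefer to script the download, grafana.com serves dashboard revisions through its API; the exact path below (in particular the latest revision segment) is an assumption, so verify it against the dashboard page if the request fails.

# Fetch the Node Exporter Full dashboard JSON into the config directory (URL path is an assumption)
curl -L -o config/dashboard.json https://grafana.com/api/dashboards/1860/revisions/latest/download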
Running the Application
To get the application up and running, navigate to the top level of the project and run docker compose up.
docker compose up
Then visit the Grafana web UI by navigating to localhost:3000. From there, sign in with the default Grafana credentials (username: admin, password: admin) and navigate to the premade dashboard.
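If the dashboard doesn't show up or a container misbehaves, a few standard Docker Compose commands are useful for poking at the stack:

docker compose ps # List the containers in the stack and their status
docker compose logs prometheus # Show the Prometheus service logs (use the service name, not the container name)
docker compose down # Tear the stack down when you are finished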