
PostgreSQL

Monitor PostgreSQL database performance and metrics with OpenTelemetry and Last9

Use the OpenTelemetry Collector together with Postgres Exporter to collect telemetry from your PostgreSQL database and send it to Last9. This integration provides comprehensive monitoring of database performance, slow queries, connection metrics, and system resource utilization.

Read the sample application for more details.

Prerequisites

Before setting up PostgreSQL monitoring, ensure you have:

  • Docker and Docker Compose installed on your system
  • Access to your PostgreSQL database with appropriate permissions
  • Last9 account with integration credentials
  1. Verify Docker Installation

    Check that Docker and Docker Compose are properly installed:

    # Check Docker installation
    docker --version
    # Check Docker Compose installation
    docker compose version
  2. Configure OpenTelemetry Collector

    Create an otel-collector-config.yaml file with the following configuration. This defines the Prometheus receiver for scraping Postgres metrics, processors for batch processing and resource detection, and the Last9 exporter configuration:

    receivers:
      prometheus:
        config:
          scrape_configs:
            - job_name: postgres-exporter
              scrape_interval: 60s
              static_configs:
                - targets: ["postgres-exporter:9187"]

    processors:
      batch:
        timeout: 10s
        send_batch_size: 10000
      resourcedetection:
        detectors: [env, system, docker, ec2, azure, gcp]
        timeout: 2s
      resource:
        attributes:
          - key: db_name
            value: postgres-db
            action: upsert
          - key: deployment.environment
            value: dev
            action: upsert

    exporters:
      otlp/last9:
        endpoint: "{{ .Logs.WriteURL }}"
        headers:
          "Authorization": "{{ .Logs.AuthValue }}"
      debug:
        verbosity: detailed

    service:
      pipelines:
        metrics:
          receivers: [prometheus]
          processors: [resourcedetection, resource, batch]
          exporters: [otlp/last9]
      telemetry:
        logs:
          level: info

    Replace the placeholder values in the exporters section with your actual Last9 credentials from the Last9 Integrations page.
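
    Optionally, you can sanity-check the finished file before starting the stack. Recent collector builds ship a validate subcommand; the sketch below assumes you have already replaced the credential placeholders, since validation reads the file as-is:

    # Validate the collector config without starting a pipeline
    docker run --rm -v "$(pwd)/otel-collector-config.yaml:/etc/otel/config.yaml" \
      otel/opentelemetry-collector-contrib:0.118.0 validate --config=/etc/otel/config.yaml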

  3. Configure Custom Database Queries

    Create a queries.yaml file to define custom metrics for monitoring slow queries and database performance:

    slow_queries:
      query: |
        SELECT
          pid,
          datname AS database,
          usename AS user,
          application_name,
          client_addr,
          EXTRACT(EPOCH FROM (now() - query_start)) AS query_time_seconds,
          REGEXP_REPLACE(SUBSTRING(query, 1, 500), E'[\n\r]+', ' ', 'g') AS query,
          state,
          wait_event_type,
          wait_event,
          backend_type,
          pg_blocking_pids(pid) AS blocked_by
        FROM pg_stat_activity
        WHERE (now() - query_start) > interval '1 minute'
          AND state <> 'idle'
          AND query NOT ILIKE '%pg_stat%'
          AND query NOT ILIKE '%pg_catalog%'
        ORDER BY query_time_seconds DESC
      metrics:
        - pid:
            usage: "LABEL"
            description: "Process ID"
        - database:
            usage: "LABEL"
            description: "Database name"
        - user:
            usage: "LABEL"
            description: "Username"
        - application_name:
            usage: "LABEL"
            description: "Application name"
        - client_addr:
            usage: "LABEL"
            description: "Client address"
        - query_time_seconds:
            usage: "GAUGE"
            description: "Query execution time in seconds"
        - query:
            usage: "LABEL"
            description: "Query text (first 500 chars)"
        - state:
            usage: "LABEL"
            description: "Query state"
        - wait_event_type:
            usage: "LABEL"
            description: "Type of event the process is waiting for"
        - wait_event:
            usage: "LABEL"
            description: "Name of the event the process is waiting for"
        - backend_type:
            usage: "LABEL"
            description: "Type of backend"
        - blocked_by:
            usage: "LABEL"
            description: "PIDs of sessions blocking this query"
  4. Set Up Docker Compose Configuration

    Create a docker-compose.yaml file with your PostgreSQL database connection details:

    services:
      postgres-exporter:
        image: quay.io/prometheuscommunity/postgres-exporter
        environment:
          - DATA_SOURCE_URI=<DB_HOST>/<DB_NAME>
          - DATA_SOURCE_USER=<DB_USER>
          - DATA_SOURCE_PASS=<DB_PASSWORD>
        volumes:
          - ./queries.yaml:/queries.yaml
        command: --extend.query-path="/queries.yaml"
        restart: unless-stopped
        ports:
          - "9187:9187"

      otel-collector:
        image: otel/opentelemetry-collector-contrib:0.118.0
        volumes:
          - ./otel-collector-config.yaml:/etc/otel/collector/config.yaml
        command: --config=/etc/otel/collector/config.yaml
        depends_on:
          - postgres-exporter

    Replace the following placeholders with your actual PostgreSQL database information:

    • <DB_HOST>: Your PostgreSQL database host
    • <DB_NAME>: Your PostgreSQL database name
    • <DB_USER>: Your PostgreSQL database username
    • <DB_PASSWORD>: Your PostgreSQL database password
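
    For reference, a filled-in environment section might look like the following (host, database name, and credentials are made up; append an sslmode parameter matching your server's TLS settings):

    environment:
      - DATA_SOURCE_URI=db.internal.example.com:5432/orders?sslmode=disable
      - DATA_SOURCE_USER=postgres_exporter
      - DATA_SOURCE_PASS=changeme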
  5. Start the Monitoring Stack

    Launch the monitoring services using Docker Compose:

    docker compose -f docker-compose.yaml up -d

    This command starts:

    • Postgres Exporter: Collects metrics from your PostgreSQL database
    • OpenTelemetry Collector: Receives metrics from Postgres Exporter and forwards them to Last9
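
    To watch both services come up (and catch configuration errors early), tail their logs:

    docker compose logs -f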

Understanding the Setup

Postgres Exporter

The Postgres Exporter connects to your PostgreSQL database and exposes metrics in Prometheus format. It’s configured to:

  • Connect to your database using the provided credentials
  • Use custom queries defined in queries.yaml
  • Expose metrics on port 9187
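
A quick way to confirm the exporter can actually reach the database is its pg_up gauge, which is 1 when the most recent connection attempt succeeded:

# pg_up is a standard postgres_exporter metric
curl -s http://localhost:9187/metrics | grep '^pg_up'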

Custom Queries

The queries.yaml file defines custom metrics to collect from PostgreSQL. The example includes a slow_queries metric that:

  • Identifies queries running longer than 1 minute
  • Collects detailed information about these queries including:
    • Process ID and database name
    • Username and application name
    • Query text and execution time
    • Wait events and blocking processes
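
Postgres Exporter prefixes each returned column with the query name, so the GAUGE column above is exposed as slow_queries_query_time_seconds, with the LABEL columns attached as Prometheus labels. A scraped sample might look roughly like this (values and label set are illustrative):

slow_queries_query_time_seconds{database="orders",user="app",state="active",wait_event_type="Lock"} 94.2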

OpenTelemetry Collector

The OpenTelemetry Collector is configured to:

  • Scrape metrics from Postgres Exporter every 60 seconds
  • Add resource attributes like database name and environment
  • Process metrics in batches for efficient transmission
  • Export metrics to Last9 using the OTLP protocol
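
The configuration also defines a debug exporter that is not wired into the pipeline. While troubleshooting, you can temporarily add it to the metrics pipeline so scraped metrics are printed to the collector's own logs:

metrics:
  receivers: [prometheus]
  processors: [resourcedetection, resource, batch]
  exporters: [otlp/last9, debug]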

Verification

  1. Check Container Status

    Verify that both containers are running successfully:

    docker ps
  2. Test Postgres Exporter Metrics

    Ensure the Postgres Exporter is collecting metrics properly:

    curl http://localhost:9187/metrics
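
    You can also confirm that the custom query from queries.yaml is being exposed. Note that the series only appears while some query has been running for over a minute:

    curl -s http://localhost:9187/metrics | grep slow_queries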
  3. Review OpenTelemetry Collector Logs

    Check the collector logs for any configuration issues:

    docker compose logs otel-collector
  4. Verify Metrics in Last9

    Log in to your Last9 account and verify that the metrics are arriving in Grafana.
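
    For example, assuming you explore the data with PromQL, the longest-running query per database can be charted with the metric defined earlier:

    max by (database) (slow_queries_query_time_seconds)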

Troubleshooting

Container Issues

If you’re experiencing problems with the containers:

# Check container status
docker ps -a
# View container logs
docker compose logs postgres-exporter
docker compose logs otel-collector

Connection Issues

For database connection problems, check the Postgres Exporter logs:

docker compose logs postgres-exporter

Common connection issues include:

  • Incorrect database credentials
  • Network connectivity problems
  • PostgreSQL not allowing connections from the exporter
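
To rule out bad credentials or network problems, try connecting with a throwaway psql container. This assumes <DB_HOST> is reachable from your machine; if it is not, also review pg_hba.conf on the server:

# Uses the same placeholders as the Docker Compose file
docker run --rm -it postgres psql "postgresql://<DB_USER>:<DB_PASSWORD>@<DB_HOST>/<DB_NAME>" -c 'SELECT 1;'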

OpenTelemetry Collector Issues

For collector configuration problems:

# Check configuration
docker compose exec otel-collector cat /etc/otel/collector/config.yaml
# Restart collector
docker compose restart otel-collector

Need Help?

If you encounter any issues or have questions: