
n8n — Last9 MCP

Give n8n's AI Agent node access to your production telemetry via Last9's MCP server — query exceptions, traces, logs, and metrics from inside any workflow.

n8n’s AI Agent node accepts MCP Client Tools as attachable tool nodes. Point one at Last9’s MCP server and the agent gets the full Last9 toolset — exceptions, logs, traces, PromQL queries, alerts, change events — usable inside any workflow.

The classic shape: an alert webhook fires → an AI Agent node investigates with Last9 tools → the result posts to Slack. No human in the loop until the agent runs out of confidence.

Prerequisites

  • n8n 1.22 or later (the release that added MCP Client Tool support)
  • A Last9 org slug and MCP token — from Query Tokens (Token Type: Client → Client Type: MCP)
  • An LLM provider credential configured in n8n (Anthropic, OpenAI, etc.)

Setup

  1. Enable MCP tools in n8n

    Set this environment variable on your n8n instance and restart:

    N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true

    Without it, the MCP Client Tool node won’t appear as an attachable tool in the AI Agent node.

    If you’re running Docker Compose, add it to your environment block:

    environment:
      N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE: "true"
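    For context, a minimal docker-compose.yml with that variable in place might look like this — the image tag and port mapping are illustrative; the environment variable is the only required part:

    ```yaml
    services:
      n8n:
        image: docker.n8n.io/n8nio/n8n   # official n8n image
        ports:
          - "5678:5678"
        environment:
          # Required for the MCP Client Tool node to appear as an
          # attachable tool on the AI Agent node
          N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE: "true"
    ```

    Restart the container after adding it; the setting is read at startup.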
  2. Get your Last9 org slug and create an MCP token

    • Org slug — the path segment right after app.last9.io/ in your browser URL (e.g. for https://app.last9.io/acme-corp/..., the slug is acme-corp)
    • MCP token — from Query Tokens (admin required):
      1. Click New Token
      2. Set Token Type to Client
      3. Set Client Type to MCP
      4. Give it a name (e.g. n8n alert triage)
      5. Click Create Token and copy the value
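    If you'd rather not eyeball the URL bar, the slug can be pulled out programmatically — a quick sketch (the example URL and its path are hypothetical):

    ```python
    from urllib.parse import urlparse

    def org_slug_from_url(url: str) -> str:
        """Return the Last9 org slug: the first path segment after app.last9.io/."""
        path = urlparse(url).path          # e.g. "/acme-corp/dashboards"
        return path.strip("/").split("/")[0]

    print(org_slug_from_url("https://app.last9.io/acme-corp/dashboards"))  # acme-corp
    ```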
  3. Add a Bearer Auth credential in n8n

    In n8n settings → Credentials → New Credential → HTTP Bearer Auth:

    | Field | Value |
    | ----- | ----- |
    | Token | The MCP token you created above |

    Name it something recognizable, e.g. Last9 MCP Token.

  4. Build a workflow with AI Agent + Last9 MCP

    In the n8n editor:

    • Create or open a workflow

    • Add an AI Agent node

    • Attach an LLM to the agent (e.g. Anthropic Chat Model or OpenAI Chat Model)

    • Click the Tools input socket on the AI Agent node and add MCP Client Tool

    • Configure the MCP Client Tool:

      | Field | Value |
      | ----- | ----- |
      | Transport | HTTP Streamable (preferred) or SSE |
      | Endpoint | https://app.last9.io/api/v4/organizations/<org_slug>/mcp |
      | Authentication | Bearer Auth |
      | Credential | Select the Last9 credential you created above |
      | Tools to Include | All (or scope to specific tool names) |
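    To sanity-check the endpoint and credential outside n8n, you can construct the same URL and header the node will send — a sketch, where the slug and token values are placeholders:

    ```python
    def mcp_endpoint(org_slug: str) -> str:
        """Last9 MCP endpoint for a given org slug, matching the node config above."""
        return f"https://app.last9.io/api/v4/organizations/{org_slug}/mcp"

    def auth_headers(token: str) -> dict:
        """Bearer header equivalent to n8n's HTTP Bearer Auth credential."""
        return {"Authorization": f"Bearer {token}"}

    print(mcp_endpoint("acme-corp"))
    # https://app.last9.io/api/v4/organizations/acme-corp/mcp
    ```

    A 401 from a request built this way points at the token; a 404 points at the slug.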
  5. Test it

    Add a Manual Trigger node, connect it to the AI Agent, and click Test workflow. Set the agent’s prompt to something that requires Last9 data:

    What exceptions has the payment-service thrown in the last 30 minutes?
    Group by exception type and include the most recent stack trace for each.

    The agent will call get_exceptions, receive the data from Last9, and write the summary into the workflow output. From there you can fan out to Slack, Linear, PagerDuty, or any other node.

Workflow patterns

Alert triage bot

A webhook from your alerting system triggers the workflow. The AI Agent uses Last9 tools to fetch recent exceptions, check for recent deploys via change events, and look up trace error rates. It posts a structured triage summary to Slack with suspected root causes. Time-to-context drops from minutes to seconds.

Incoming alert: payment-service latency p99 > 2s
Investigate this alert using Last9. Check:
1. Recent exceptions in payment-service
2. Any change events (deploys, config changes) in the last 2 hours
3. Downstream service error rates
Summarize your findings and rate your confidence (low/medium/high)
that you've identified the root cause.

Daily ops digest

A cron trigger runs at 9 AM. The agent asks “which services degraded in the last 24 hours?” and sends the summary to the on-call Slack channel before standup.

Deploy postmortem helper

A GitHub webhook fires after a deploy. The workflow waits 15 minutes, then the agent compares pre/post error rates and latency using PromQL tools. It posts a verdict: thumbs-up or “investigate this” with specifics.
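The pre/post comparison maps naturally onto the PromQL tools. A sketch of the kind of queries the agent might run — the metric and label names here follow OpenTelemetry conventions and are assumptions; substitute whatever your instrumentation actually emits:

```promql
# Current 5xx error ratio for payment-service over the last 15 minutes
sum(rate(http_server_duration_count{service_name="payment-service", http_status_code=~"5.."}[15m]))
/
sum(rate(http_server_duration_count{service_name="payment-service"}[15m]))

# The same ratio in the window before the deploy, using offset
sum(rate(http_server_duration_count{service_name="payment-service", http_status_code=~"5.."}[15m] offset 30m))
/
sum(rate(http_server_duration_count{service_name="payment-service"}[15m] offset 30m))
```

Each expression is a single prometheus_instant_query call; the agent compares the two ratios and writes the verdict.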

On-call escalation filter

An alert fires → agent checks if the anomaly is a known flap pattern using historical trace data → if it looks real, pages the on-call engineer; if it’s a known pattern, acknowledges and closes the alert automatically.

Available Last9 MCP tools

The MCP server exposes these tool categories:

| Category | Tools |
| -------- | ----- |
| Services & APM | get_service_summary, get_service_performance_details, get_service_operations_summary, get_service_dependency_graph, get_service_environments |
| Exceptions | get_exceptions |
| Traces | get_traces, get_service_traces, get_trace_attributes |
| Logs | get_logs, get_service_logs, get_log_attributes, get_drop_rules, add_drop_rule |
| Metrics (PromQL) | prometheus_range_query, prometheus_instant_query, prometheus_labels, prometheus_label_values |
| Alerts | get_alerts, get_alert_config |
| Change Events | get_change_events |
| Databases | get_databases, get_database_queries, get_database_slow_queries, get_database_server_metrics |
| Notifications | get_notification_channels |

The agent picks the right tools automatically based on your prompt. You can constrain which tools are available by listing specific names in Tools to Include on the MCP Client Tool node.

Complete working example

An importable n8n workflow JSON and Docker Compose setup are available in the opentelemetry-examples repository under n8n-mcp/. The workflow covers the alert triage pattern end-to-end.

Troubleshooting

MCP Client Tool not selectable on the AI Agent node. You missed N8N_COMMUNITY_PACKAGES_ALLOW_TOOL_USAGE=true or didn’t restart n8n after setting it.

401 from Last9. The Bearer token must be a Client → MCP token from Query Tokens. API keys from the API Access page, Refresh Tokens, and session tokens won’t work on the MCP endpoint.

Wrong org slug. The slug is the path segment immediately after app.last9.io/ — not your team name, display name, or email address. Check your browser URL bar.

Agent calls no tools. Prompt the agent more specifically. Open-ended prompts like “how is my system doing?” may not trigger tool use. Prompts that name a specific service, time window, or signal type work better.

HTTP Streamable transport fails; SSE works. Some reverse proxies buffer responses and break SSE-over-HTTP streaming. Try SSE transport as a fallback, or configure your proxy to pass through Transfer-Encoding: chunked.
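If nginx is the proxy in front of n8n or in the path to Last9, disabling response buffering on the streaming route usually resolves this — a sketch, with an illustrative location and upstream:

```nginx
location / {
    proxy_pass http://n8n:5678;
    proxy_http_version 1.1;          # required for chunked/streamed responses
    proxy_set_header Connection "";  # keep the upstream connection alive
    proxy_buffering off;             # don't buffer SSE / streamed bodies
    proxy_read_timeout 300s;         # long-lived streams need a generous timeout
}
```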

Please get in touch with us on Discord or Email if you have any questions.