Last9 MCP
Connect your AI agent to production observability data for intelligent debugging and issue resolution.
Last9’s MCP server transforms your development workflow by bringing production observability directly into your IDE. Ask your AI assistant questions like “What’s causing the recent spike in errors?” or “Show me the slowest endpoints from the last hour” and get instant insights with suggested fixes.
What is Model Context Protocol?
MCP is an open protocol that standardizes how applications provide context to LLMs. Think of MCP like a universal adapter for AI applications — it provides a standardized way to connect AI models to different data sources and tools.
Using MCP, AI agents in your IDE (Cursor, Windsurf, VS Code) or Claude Desktop can access your observability data in Last9’s Telemetry Data Platform to provide intelligent assistance based on real production context.
Why use Last9 MCP?
Turn production issues into local solutions. The Last9 MCP server brings real-time production context directly to your development environment, enabling AI agents to deliver conversational observability through:
- Debug with production context: Analyze exceptions, performance issues, and service dependencies using actual production data
- Suggest intelligent fixes: Get code suggestions based on real observability signals, not just theoretical best practices
- Eliminate “works on my machine”: Bridge the gap between local development and production reality with an agentic developer experience
Gone are the days of switching between multiple tools to understand production issues. This represents the shift from traditional monitoring to an AI-native approach. Read more in our launch blog post and our thoughts on why your observability stack needs to speak agent.
Example Use Cases
Debug Production Exceptions
"I'm seeing errors in production. Can you help me understand what's happening?"
The agent uses get_exceptions and get_service_performance_details to analyze the issue.
Performance Investigation
"My API response times seem slow. What's causing the latency?"
The agent uses get_service_dependency_graph and prometheus_range_query to identify bottlenecks.
Log Analysis for Issues
"Find error logs from the user-service in the last 30 minutes"
The agent uses get_logs with service and severity filters to surface relevant logs, as sketched below.
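For the log-analysis prompt above, for example, the agent translates the request into a get_logs call. MCP tools receive a JSON object of named arguments; the payload below is purely illustrative, and the severity value and limit are assumptions rather than anything fixed by Last9:

```json
{
  "service": "user-service",
  "severity": "error",
  "lookback_minutes": 30,
  "limit": 20
}
```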
Prerequisites
Before setting up Last9 MCP, ensure you have:
- Observability data flowing to Last9 via OpenTelemetry integration
- One of the supported IDEs: Cursor, Windsurf, VS Code, or Claude Desktop
- Access to your Last9 authentication credentials
Setup
- Get your Last9 credentials
You’ll need three values from your Last9 account:
- Base URL & Auth Token: Available from your GenAI integration page
- Refresh Token: Generate from API Access settings with Write permissions
- Install the Last9 MCP server

Using Homebrew:

```bash
# Add the Last9 tap
brew tap last9/tap

# Install the Last9 MCP CLI
brew install last9-mcp
```

Or using npm:

```bash
# Install globally
npm install -g @last9/mcp-server

# Or run directly with npx
npx @last9/mcp-server
```

- Configure your IDE
Claude Desktop

- Open Claude Desktop → Settings → Developer
- Click “Edit Config” to open claude_desktop_config.json
- Add the Last9 MCP server configuration:

```json
{
  "mcpServers": {
    "last9": {
      "command": "/opt/homebrew/bin/last9-mcp",
      "env": {
        "LAST9_BASE_URL": "your_otlp_host_here",
        "LAST9_AUTH_TOKEN": "your_otlp_auth_token_here",
        "LAST9_REFRESH_TOKEN": "your_write_refresh_token_here"
      }
    }
  }
}
```

- Save the file and restart Claude Desktop
Cursor

- Open Cursor → Settings → Cursor Settings → MCP
- Click “Add New Global MCP Server”
- Add the configuration:

```json
{
  "mcpServers": {
    "last9": {
      "command": "/opt/homebrew/bin/last9-mcp",
      "env": {
        "LAST9_BASE_URL": "your_otlp_host_here",
        "LAST9_AUTH_TOKEN": "your_otlp_auth_token_here",
        "LAST9_REFRESH_TOKEN": "your_write_refresh_token_here"
      }
    }
  }
}
```

- Save and restart Cursor
Windsurf

- Open Windsurf → Settings → Developer
- Click “Edit Config” to open windsurf_config.json
- Add the Last9 MCP server configuration:

```json
{
  "mcpServers": {
    "last9": {
      "command": "/opt/homebrew/bin/last9-mcp",
      "env": {
        "LAST9_BASE_URL": "your_otlp_host_here",
        "LAST9_AUTH_TOKEN": "your_otlp_auth_token_here",
        "LAST9_REFRESH_TOKEN": "your_write_refresh_token_here"
      }
    }
  }
}
```

- Save the file and restart Windsurf
VS Code

Note: MCP support in VS Code is available from v1.99+ and currently in preview.

- Open VS Code → Settings → User → Features → Chat
- Click “Edit settings.json”
- Add the MCP configuration:

```json
{
  "mcp": {
    "servers": {
      "last9": {
        "type": "stdio",
        "command": "/opt/homebrew/bin/last9-mcp",
        "env": {
          "LAST9_BASE_URL": "your_otlp_host_here",
          "LAST9_AUTH_TOKEN": "your_otlp_auth_token_here",
          "LAST9_REFRESH_TOKEN": "your_write_refresh_token_here"
        }
      }
    }
  }
}
```

- Save and restart VS Code
- Verify the connection
Once configured, your AI agent will have access to Last9 tools. Try asking: “What exceptions occurred in the last hour?” or “Show me the performance summary for my services.”
Available Tools
Your AI agent can now access these Last9 capabilities:
Observability & APM
- get_exceptions: Get server-side exceptions over a specified time range. (An example call is sketched after this list.)
  Parameters:
  - limit (integer, optional): Maximum number of exceptions to return. Default: 20
  - lookback_minutes (integer, recommended): Number of minutes to look back from now. Default: 60. Examples: 60, 30, 15
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to use lookback_minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time
  - span_name (string, optional): Name of the span to filter by
- get_service_summary: Get a service summary over a given time range. Includes service name, environment, throughput, error rate, and response time. All values are p95 quantiles over the time range.
  Parameters:
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: end_time_iso - 1 hour
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
  - env (string, optional): Environment to filter by. Default: ‘prod’
- get_service_environments: Get available service environments within a specified time range.
  Parameters:
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: now - 60 minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
  Note: Returns an array of environments that can be used with other APM tools. If the array is empty, use an empty string "" for environment parameters.
- get_service_performance_details: Get detailed performance metrics for a specific service.
  Parameters:
  - service_name (string, required): Service name
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: now - 60 minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
  - env (string, optional): Environment. Default: ‘prod’
- get_service_operations_summary: Get an operations summary for a service, covering HTTP endpoints, database queries, messaging producers, and HTTP client calls.
  Parameters:
  - service_name (string, required): Service name
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: now - 60 minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
  - env (string, optional): Environment. Default: ‘prod’
- get_service_dependency_graph: Get the service dependency graph showing incoming and outgoing dependencies, including infrastructure. Includes throughput, response times, and error rates.
  Parameters:
  - service_name (string, optional): Name of the service
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: now - 60 minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
  - env (string, optional): Environment. Default: ‘prod’
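As a rough sketch of how an agent might invoke one of these tools, here is an illustrative argument payload for get_exceptions. The span name is a hypothetical endpoint, and the other values simply narrow the documented defaults:

```json
{
  "lookback_minutes": 30,
  "limit": 10,
  "span_name": "GET /api/orders"
}
```

Since span_name is optional, leaving it out should simply drop that filter and return exceptions across all spans in the lookback window.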
Prometheus Integration
- prometheus_range_query: Execute Prometheus range queries for metrics over a time period. (An example call is sketched after this list.)
  Parameters:
  - query (string, required): Range query to execute
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: now - 60 minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
- prometheus_instant_query: Execute Prometheus instant queries for metrics at a specific point in time.
  Parameters:
  - query (string, required): Instant query to execute
  - time_iso (string, optional): Time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
- prometheus_label_values: Get all label values for a specific label name.
  Parameters:
  - match_query (string, required): Valid PromQL filter query
  - label (string, required): Label to get values for
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: now - 60 minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
- prometheus_labels: Get all available label names.
  Parameters:
  - match_query (string, required): Valid PromQL filter query
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Default: now - 60 minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Default: Current time
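For example, a prometheus_range_query call investigating request throughput might pass arguments like the following. The PromQL expression and metric name are hypothetical and depend on the metrics you actually send to Last9:

```json
{
  "query": "sum by (service) (rate(http_requests_total[5m]))",
  "start_time_iso": "2025-01-15 09:00:00",
  "end_time_iso": "2025-01-15 10:00:00"
}
```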
Log Management
- get_logs: Get logs filtered by optional service name and/or severity level within a specified time range.
  Parameters:
  - service (string, optional): Name of the service to get logs for
  - severity (string, optional): Severity of the logs to get
  - lookback_minutes (integer, recommended): Number of minutes to look back from now. Default: 60. Examples: 60, 30, 15
  - start_time_iso (string, optional): Start time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to use lookback_minutes
  - end_time_iso (string, optional): End time in ISO format (YYYY-MM-DD HH:MM:SS). Leave empty to default to current time
  - limit (integer, optional): Maximum logs to return. Default: 20
- get_drop_rules: Get drop rules for logs, which determine what logs get filtered out before reaching Last9.
- add_drop_rule: Add a new drop rule to filter out specific logs at the Last9 Control Plane. (An example call is sketched after this list.)
  Parameters:
  - name (string, required): Name of the drop rule
  - filters (array, required): List of filter conditions to apply. Each filter has:
    - key (string, required): The key to filter on. Only attributes and resource.attributes keys are supported. For resource attributes, use the format resource.attributes[key_name]; for log attributes, use attributes[key_name]. Double quotes in key names must be escaped
    - value (string, required): The value to filter against
    - operator (string, required): The operator used for filtering. Valid values: "equals", "not_equals"
    - conjunction (string, required): The logical conjunction between filters. Valid values: "and"
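As an illustration, a rule that drops noisy health-check logs might look roughly like this. The attribute key and value are hypothetical, the placement of conjunction follows the parameter description above, and the exact shape should be confirmed against your Last9 account before relying on it:

```json
{
  "name": "drop-healthcheck-logs",
  "filters": [
    {
      "key": "attributes[http.route]",
      "value": "/healthz",
      "operator": "equals",
      "conjunction": "and"
    }
  ]
}
```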
Alert Management
- get_alert_config: Get all configured alert rules from Last9.
  Returns:
  - Alert rule ID and name
  - Primary monitoring indicator
  - Current state and severity
  - Alerting algorithm details
  - Entity and organization information
  - Configuration properties
  - Timestamps for creation/updates
  - Group timeseries notification settings
- get_alerts: Get currently active alerts from the Last9 monitoring system. (An example call is sketched after this list.)
  Parameters:
  - timestamp (integer, optional): Unix timestamp. Default: current time
  - window (integer, optional): Time window in seconds. Default: 900 seconds, range: 60-86400
  Returns:
  - Alert rule details
  - Alert state and severity
  - Firing timestamps
  - Rule configurations
  - Metric degradation information
  - Group labels and annotations
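For instance, to look at active alerts over the last 30 minutes rather than the default 15, an agent could call get_alerts with a widened window. The timestamp is omitted so it defaults to the current time, and the value here is only illustrative:

```json
{
  "window": 1800
}
```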
Demos
- Fixing a recent exception
- Optimizing logs for a service
- Creating an RCA based on recent issues in the production environment
- Analyzing background worker processes
Best Practices
- Start broad, then narrow: Ask about overall service health before diving into specific issues
- Include time context: Specify time ranges when investigating issues (“in the last hour”, “during the outage yesterday”)
- Combine tools: The agent can correlate data across metrics, logs, traces, and alerts for comprehensive analysis
Troubleshooting
Common Issues:
- “Last9 tools not available”: Verify your IDE configuration and restart the application
- “Authentication failed”: Double-check your auth tokens and refresh token permissions
- “No data returned”: Ensure your services are sending telemetry to Last9 and try broader time ranges
Please get in touch with us on Discord or Email if you have any questions.