# Last9 MCP for AI Agents

Programmatic access to Last9 MCP for AI agents and services, with Python examples for the Anthropic SDK and LangChain, plus guidance on team token management.
Use Last9’s MCP server as a live data layer inside any Python AI agent — SRE copilots, incident bots, on-call assistants, or observability-aware automation pipelines.
This page covers programmatic (API-key) access for agents and services. For IDE-based developer setup (Claude Code, Cursor, VS Code, Windsurf), see Last9 MCP.
## Authentication
Two modes are available:
| Mode | Best for | How it works |
|---|---|---|
| Hosted MCP + OAuth | Human developers in IDEs | Each developer authenticates via browser OAuth — no tokens to manage or rotate |
| MCP token (Bearer) | Agents, services, CI pipelines | Generate a client token; pass it as `Authorization: Bearer <token>` in every request |
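As a hedged sketch of the token mode: an MCP request over streamable HTTP is a JSON-RPC POST carrying the token as a Bearer credential. The `tools/list` method and the `Accept` header follow the MCP specification; the org slug and token values are placeholders you must supply.

```python
import json
import os
import urllib.request

ORG_SLUG = os.environ.get("LAST9_ORG_SLUG", "<org_slug>")
MCP_TOKEN = os.environ.get("LAST9_MCP_TOKEN", "<mcp_token>")

url = f"https://app.last9.io/api/v4/organizations/{ORG_SLUG}/mcp"

# Every request carries the token as a Bearer credential.
headers = {
    "Authorization": f"Bearer {MCP_TOKEN}",
    "Content-Type": "application/json",
    "Accept": "application/json, text/event-stream",
}

# A JSON-RPC body asking the server to list its tools (per the MCP spec).
payload = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

request = urllib.request.Request(
    url, data=json.dumps(payload).encode(), headers=headers, method="POST"
)
# response = urllib.request.urlopen(request)  # uncomment to actually send
```

In practice you would use an MCP client library rather than raw HTTP; this only illustrates where the token goes.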
### Get an MCP Token
1. Go to Control Plane → Query Tokens
2. Click New Token → Token Type: Client → Client Type: MCP
3. Copy the token; it is shown only once
The MCP server URL is:

```
https://app.last9.io/api/v4/organizations/<org_slug>/mcp
```

Your org slug is the path segment in your Last9 dashboard URL: `app.last9.io/<org_slug>/...`
## Team Token Management
| Scenario | Recommendation |
|---|---|
| Developer IDEs | Hosted MCP — individual OAuth per developer |
| Shared SRE bot / on-call assistant | One MCP token per service identity, stored in your secrets manager |
| CI pipelines or automation scripts | One MCP token per pipeline, scoped to that pipeline’s identity |
| Multiple team members sharing one token | ⚠️ Not recommended — makes auditing harder and a single rotation disrupts everyone |
Tokens are organization-scoped and carry the permissions of the user who generated them. Rotate tokens if a team member with access leaves.
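To keep tokens out of source and CI config, load them from the environment (populated by your secrets manager) and fail fast when one is missing. This helper is illustrative, not part of any Last9 SDK:

```python
import os


def require_env(name: str) -> str:
    """Fetch a required secret from the environment, failing loudly if absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(
            f"{name} is not set. Inject it from your secrets manager; "
            "do not hardcode tokens in source or CI config."
        )
    return value


# mcp_token = require_env("LAST9_MCP_TOKEN")
```

Failing at startup makes a missing or mis-rotated token obvious immediately, rather than surfacing later as a 401 mid-investigation.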
For questions about rate limits or high-volume agent workloads, contact cs@last9.io.
## Python Agent Examples

### Anthropic SDK (Claude)
Use the Anthropic SDK’s remote MCP support to give Claude direct access to Last9 tools. Claude automatically decides which tools to call based on the question:
```python
import anthropic

client = anthropic.Anthropic()

response = client.beta.messages.create(
    model="claude-opus-4-7-20250514",
    max_tokens=4096,
    system="You are an SRE assistant. Investigate issues using Last9 observability data.",
    messages=[
        {
            "role": "user",
            "content": "Why is the payment-service error rate elevated right now?",
        }
    ],
    mcp_servers=[
        {
            "type": "url",
            "url": "https://app.last9.io/api/v4/organizations/<org_slug>/mcp",
            "name": "last9",
            "authorization_token": "<mcp_token>",
        }
    ],
    betas=["mcp-client-2025-04-04"],
)
print(response.content[-1].text)
```

Replace `<org_slug>` with your organization slug and `<mcp_token>` with an MCP-type client token from Query Tokens.
The agent calls tools like get_service_performance_details, get_exceptions, and get_service_dependency_graph as needed to answer the question.
### LangChain / LangGraph

Use `langchain-mcp-adapters` to expose all Last9 MCP tools as LangChain tools for any LangGraph or LangChain agent:
```python
import asyncio

from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.prebuilt import create_react_agent

ORG_SLUG = "<org_slug>"
MCP_TOKEN = "<mcp_token>"


async def run_sre_agent(question: str) -> str:
    async with MultiServerMCPClient(
        {
            "last9": {
                "url": f"https://app.last9.io/api/v4/organizations/{ORG_SLUG}/mcp",
                "transport": "streamable_http",
                "headers": {"Authorization": f"Bearer {MCP_TOKEN}"},
            }
        }
    ) as mcp_client:
        tools = await mcp_client.get_tools()
        agent = create_react_agent("anthropic:claude-opus-4-7", tools)
        result = await agent.ainvoke(
            {"messages": [{"role": "user", "content": question}]}
        )
        return result["messages"][-1].content


if __name__ == "__main__":
    answer = asyncio.run(
        run_sre_agent("Which services had elevated error rates in the last hour?")
    )
    print(answer)
```

Install the required packages:
```shell
pip install langchain-mcp-adapters langgraph langchain-anthropic
```

### OpenAI Responses API
See Last9 MCP → Using with OpenAI’s Responses API for the OpenAI example with Bearer token auth.
## Building an On-Call Bot
A minimal pattern for a bot that receives an alert and returns an investigation summary using Last9 data:
```python
import os

import anthropic

client = anthropic.Anthropic()

LAST9_MCP_SERVER = {
    "type": "url",
    "url": f"https://app.last9.io/api/v4/organizations/{os.environ['LAST9_ORG_SLUG']}/mcp",
    "name": "last9",
    "authorization_token": os.environ["LAST9_MCP_TOKEN"],
}


def investigate_alert(alert_text: str) -> str:
    """Call from your alerting webhook handler."""
    response = client.beta.messages.create(
        model="claude-opus-4-7-20250514",
        max_tokens=2048,
        system=(
            "You are an on-call SRE assistant. Investigate the alert using Last9 tools. "
            "Return a concise summary: what is failing, probable cause, and suggested next action. "
            "Be direct and specific; avoid hedging."
        ),
        messages=[{"role": "user", "content": f"Alert fired: {alert_text}"}],
        mcp_servers=[LAST9_MCP_SERVER],
        betas=["mcp-client-2025-04-04"],
    )
    return response.content[-1].text


# Example: wire into a Slack Events API handler
# investigation = investigate_alert("payment-service error rate > 5% for 5 minutes")
# slack_client.chat_postMessage(channel="#incidents", text=investigation)
```

Store `LAST9_ORG_SLUG` and `LAST9_MCP_TOKEN` in your secrets manager; never hardcode them.
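If your alerts arrive as Alertmanager webhooks, you will need to flatten the JSON payload into the `alert_text` string the bot expects. A hedged sketch against the standard Alertmanager webhook shape (`alerts[].labels` and `alerts[].annotations`); adapt the field names to your alerting source:

```python
def alert_text_from_webhook(payload: dict) -> str:
    """Flatten an Alertmanager-style webhook payload into one line per alert."""
    lines = []
    for alert in payload.get("alerts", []):
        labels = alert.get("labels", {})
        annotations = alert.get("annotations", {})
        lines.append(
            f"{labels.get('alertname', 'unknown')} on "
            f"{labels.get('service', 'unknown-service')}: "
            f"{annotations.get('summary', 'no summary')}"
        )
    return "\n".join(lines)


# investigation = investigate_alert(alert_text_from_webhook(request_json))
```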
## Key Tools for Agent Workflows
The MCP server exposes all Last9 observability tools. The most useful for agent workflows:
| Tool | What it returns |
|---|---|
| `get_service_performance_details` | Latency p95, error rate, and throughput for a service |
| `get_exceptions` | Recent unhandled exceptions with stack traces |
| `get_service_dependency_graph` | Upstream/downstream services with SLO signals |
| `get_alerts` | Currently firing alert rules and their severity |
| `get_logs` | Filtered log entries by service, severity, or body |
| `get_change_events` | Recent deployments and config changes (correlate with incidents) |
| `prometheus_range_query` | Results of any PromQL expression run over your metrics |
| `did_you_mean` | Fuzzy-matched entity names, to avoid empty results from typos |
For the full reference, see Available Tools on the main MCP page.
## Troubleshooting

- **401 Unauthorized**: check that
  - the `Authorization` header is `Bearer <token>` (not `Basic`, and not the token alone), and
  - the token is an MCP-type Client token from Query Tokens, not a Refresh Token or an ingestion token.
- **Tools return empty results**: the default lookback is 60 minutes. Pass a broader window when investigating older incidents:

  ```
  # In tool parameters passed by the agent
  {"lookback_minutes": 360}
  ```

  If a service name returns no results, have the agent call `did_you_mean` first to resolve the correct name.
- **Rate limit errors**: each organization has per-endpoint rate limits. Avoid sharing one token across many concurrent agent instances. For high-throughput workloads, contact cs@last9.io to discuss limits.
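When an agent does hit a limit, retry with jittered exponential backoff rather than hammering the endpoint. A generic sketch, not a Last9 SDK feature; the `is_rate_limited` predicate is a placeholder for whatever 429 detection your HTTP client exposes:

```python
import random
import time


def with_backoff(call, *, retries=4, base_delay=1.0, is_rate_limited=None,
                 sleep=time.sleep):
    """Run `call`, retrying rate-limit errors with jittered exponential backoff."""
    if is_rate_limited is None:
        # Crude default: treat any error mentioning 429 as a rate limit.
        is_rate_limited = lambda exc: "429" in str(exc)
    for attempt in range(retries + 1):
        try:
            return call()
        except Exception as exc:
            if attempt == retries or not is_rate_limited(exc):
                raise
            # Waits ~1s, 2s, 4s, ... plus jitter so concurrent agents
            # do not all retry in lockstep.
            sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.5))
```

Usage: `with_backoff(lambda: investigate_alert(alert_text))`. The injectable `sleep` keeps the helper testable without real delays.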
Please get in touch with us on Discord or Email if you have any questions.