AI Assistant
Ask questions about your infrastructure in plain English and get instant insights from logs, metrics, traces, and alerts.
Last9’s AI Assistant brings conversational observability directly into your workflow. Ask questions like “What’s causing the recent spike in errors?” or “Show me the system health overview” and get instant, actionable insights from your telemetry data.
Access the AI Assistant in multiple ways:
- In the Last9 dashboard for detailed investigation
- In Slack via @mentions for team collaboration during incidents
- In your IDE through the Last9 MCP for debugging while coding

AI Assistant vs Last9 MCP vs Slack App
Last9 offers three complementary AI-powered interfaces for working with your observability data:
| | AI Assistant | Last9 MCP | Slack App |
|---|---|---|---|
| Where | Built into the Last9 dashboard | Your IDE (Cursor, VS Code, Windsurf, Claude Desktop) | Your Slack workspace |
| Best for | Investigating incidents, reviewing system health, exploring alerts | Debugging production issues while coding, correlating code changes with production | Team collaboration, quick queries during incidents in Slack |
| How it works | Natural language queries against your telemetry data in the browser | MCP protocol connects your AI coding agent to Last9’s observability APIs | @mention @Last9 in any channel to query observability data |
The AI Assistant is ideal when you’re in the Last9 dashboard and want to quickly interrogate your infrastructure — check system health, investigate errors, or review alerts without writing queries.
Last9 MCP brings production context into your development environment. When you’re debugging code in your IDE, MCP lets your AI coding agent query exceptions, logs, traces, and metrics without leaving the editor.
The Slack App enables team-wide collaboration on incidents directly in Slack. When an alert fires in your incident channel, anyone can @mention @Last9 to investigate without context switching.
Use them together: respond to alerts in Slack with @Last9, investigate deeply in the dashboard with the AI Assistant, then switch to MCP in your IDE when you’re ready to write the fix.
Key Features
Natural Language Queries
Ask questions in plain English about your infrastructure:
- “What errors are happening currently?”
- “Show me the p95 latency for my services”
- “Are there any 5xx errors?”
- “Analyze the performance of my API”
Quick Actions
The AI Assistant provides quick action cards to help you get started:
| Quick Action | Description |
|---|---|
| Recent Errors | Latest error patterns and incidents across your services |
| Performance | P95 latency and response metrics for your applications |
| Active Alerts | Current system alerts status and firing alerts |
| Database Health | Performance and usage insights for your databases |
| Trace Analysis | Identify slow request patterns and bottlenecks |
| System Overview | Comprehensive health report across all services |
Query Execution Progress
When you ask a question, the AI Assistant shows real-time progress as it queries your observability data:
- Query Logs - Searching log data for relevant information
- Query Exceptions - Analyzing application exceptions and errors
- Query Metrics - Fetching performance metrics
- Query Traces - Examining distributed traces
Each step shows a completion status, so you can see exactly what data sources are being analyzed.
Deep Links
Every query the AI Assistant executes includes a View in … link that takes you directly to the underlying data in Last9.

This allows you to:
- Continue your investigation with the full query interface
- Explore the underlying data in more detail
- Share specific queries with your team
- Build on the AI-generated queries with additional filters
Deep links preserve the exact query, time range, and filters, making it easy to transition from conversational exploration to detailed analysis.
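To make that concrete, here is a sketch of what a link carrying the query, time range, and filters might look like. The path and parameter names below are hypothetical illustrations, not Last9's actual URL scheme:

```python
from urllib.parse import urlencode

def build_deep_link(base_url: str, query: str, start: int, end: int, filters: dict) -> str:
    """Assemble a shareable link that pins the query, time range, and filters.

    The path and parameter names here are illustrative only.
    """
    params = {
        "query": query,   # the exact query the assistant executed
        "from": start,    # start of the time range (epoch seconds)
        "to": end,        # end of the time range (epoch seconds)
        **{f"filter.{k}": v for k, v in filters.items()},
    }
    return f"{base_url}/logs?{urlencode(params)}"

link = build_deep_link(
    "https://app.last9.io",
    query="level:error service:checkout",
    start=1700000000,
    end=1700003600,
    filters={"env": "production"},
)
```

Because everything needed to reproduce the view lives in the URL, pasting the link into a team channel hands a colleague the exact same query and time window.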
Intelligent Analysis
The AI Assistant doesn’t just return raw data—it provides intelligent analysis including:
- Service Health Tables: Visual overview of all services with throughput, error rates, response times, and health status
- Key Findings: Automatically identified critical issues and positive indicators
- Structured Error Details: Service name, environment, error type, and error messages for quick diagnosis
- Alert Configuration Summary: Overview of configured alert rules and active instances
- Recommended Actions: Prioritized action items (Immediate, High Priority, Monitor, Review)
Getting Started
1. Navigate to AI Assistant

   Click on AI Assistant in the left sidebar of your Last9 dashboard, or click the AI Assistant card on the Home page.

2. Ask a question or use Quick Actions

   Type your question in the input field, or click one of the quick action cards to get started immediately.

3. Review the analysis

   The assistant gathers information from your observability data and presents:
   - Service health status
   - Active alerts and their severity
   - Key findings and recommendations
   - Actionable next steps

4. Continue the conversation

   Ask follow-up questions to dive deeper into specific services, alerts, or issues. The assistant maintains context throughout your conversation.
Example Use Cases
System Health Overview
"Give me a system health overview"

The assistant provides:
- Available environments (Production, Staging, etc.)
- Service health table with throughput, error rates, and response times
- Critical issues with firing alerts
- Positive indicators (no exceptions, healthy services)
- Recommended actions prioritized by urgency
Error Investigation
"Which errors are happening currently?"

The assistant analyzes:
- Recent exceptions across services
- Error patterns and frequencies
- Affected endpoints and services
- Suggested investigation steps
Performance Analysis
"What is the p95 latency for my services?"

The assistant returns:
- P95 response times for each service
- Services exceeding thresholds (marked as Slow or Critical)
- Performance degradation details
- Comparison with configured thresholds
Alert Review
"Show me active alerts"

The assistant displays:
- Currently firing alerts with severity levels
- Alert duration and trigger times
- Threshold values vs current values
- Recommended remediation steps
Chat History
Your conversations are saved in the chat history sidebar, allowing you to:
- Resume previous investigations
- Reference past analyses
- Track recurring issues over time
To start a fresh conversation, click the + New chat button.
Ask Mode in Logs and Traces
In addition to the dedicated AI Assistant, you can use natural language queries directly within the Logs and Traces explorers through Ask Mode.
Using Ask Mode
Navigate to Logs Explorer or Traces Explorer and click the Ask tab to access AI-powered querying.

Type your question in natural language, such as:
- “investigate slow requests from last9 api”
- “show me errors from the payment service”
- “find database queries taking more than 1 second”
The AI will generate the appropriate filters and display matching results.
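Conceptually, Ask Mode translates a free-text question into structured filters before running the search. The real translation is done by an LLM; the toy sketch below only illustrates the idea, and every filter key in it is invented for illustration:

```python
import re

def question_to_filters(question: str) -> dict:
    """Toy keyword-based stand-in for the LLM translation step.

    Filter keys ("status", "duration_gt_ms", "service") are illustrative,
    not Last9's actual query schema.
    """
    q = question.lower()
    filters = {}
    if "error" in q:
        filters["status"] = "error"
    if "slow" in q:
        filters["duration_gt_ms"] = 1000  # arbitrary "slow" threshold
    # e.g. "from the payment service" -> service=payment
    m = re.search(r"from (?:the )?(\w+) service", q)
    if m:
        filters["service"] = m.group(1)
    return filters

print(question_to_filters("show me errors from the payment service"))
# {'status': 'error', 'service': 'payment'}
```

The generated filters stay visible and editable in the explorer, so you can refine them by hand after the AI proposes a starting point.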

Quick Start Templates
Ask Mode provides pre-built templates to help you get started quickly:
| Template | Description |
|---|---|
| Error Traces | Show all traces with error status codes |
| Slow Traces | Find traces with duration greater than 1 second |
| Database Queries | Filter traces containing database operations |
| HTTP Requests | Show traces with HTTP status codes and request paths |
| Failed Requests | View traces with 4xx and 5xx status codes |
| Service Errors | Display traces from specific service with errors |
Team Queries
Ask Mode also displays saved queries from your team, making it easy to reuse common investigation patterns across your organization.
Using AI Assistant in Slack
Connect Last9 AI Assistant to your Slack workspace to query observability data directly from Slack channels using @mentions. This brings AI-powered insights into your team’s existing incident response workflows without leaving Slack.
Prerequisites
Before installing the Slack app, ensure you have:
- AI Assistant enabled for your Last9 organization. Request access from the AI Assistant page if not already enabled
- Observability data flowing to Last9 (metrics, logs, traces, or events)
- Slack workspace permissions to install apps
- Last9 user account with the same email as your Slack account (required for authorization)
Installation
1. Open the installation URL

2. Authorize the app
   - Verify the Slack workspace is correct
   - Review the requested permissions:
     - Read @mentions in channels
     - Send messages to channels
     - Read basic user information
   - Click Allow to authorize Last9 in your Slack workspace

3. Invite the bot to channels

   After installation, invite @Last9 to the channel(s) where you want to use it: /invite @Last9

4. Verify installation

   Test by mentioning @Last9 with no query text. You should receive a helper prompt describing available queries.
Using the Slack App
Query Format
Mention @Last9 followed by your question in any channel where the bot is present:
@Last9 [your question]

Example Queries

Error Investigation:
- @Last9 Show me errors for cloudflare in the last hour
- @Last9 Which services have 5xx errors?
- @Last9 What's causing the recent spike in errors?

Performance Analysis:
- @Last9 Get endpoints with 5xx responses in the last hour
- @Last9 What is the p95 latency for my services?
- @Last9 Show me slow database queries

System Health:
- @Last9 Give me a system health overview
- @Last9 What alerts are currently firing?
- @Last9 Show me active incidents

Response Behavior
- Threaded replies: All responses appear in threaded replies to keep channels organized
- Contextual follow-ups: Ask follow-up questions in the same thread to maintain context
- Deep links: Responses include “View in Logs”, “View in Traces”, or “View in Exceptions” links to the Last9 dashboard for detailed investigation
- Structured output: Results are formatted with tables, bullet points, and sections for easy scanning

Channel Configuration
Public Channels:
Mention @Last9 directly in any public channel where the bot is a member.
Private Channels:
- Invite the bot to the private channel first: /invite @Last9
- Then use @Last9 mentions as normal
Team Collaboration
The Slack app enables collaborative incident investigation:
- Shared context: Everyone in the thread sees the same analysis
- Team visibility: Questions and answers are visible to all channel members
- Async investigation: Team members in different time zones can contribute to the same investigation thread
- Audit trail: Slack’s message history preserves the investigation timeline
Example incident workflow:
1. Alert fires and posts to #incidents
2. On-call engineer asks: @Last9 Show me errors in the payment service
3. AI provides structured error analysis with deep links
4. Team lead asks follow-up: @Last9 When did this start?
5. Engineer clicks "View in Traces" to investigate root cause in the dashboard
6. Resolution documented in the same Slack thread
Troubleshooting
AI Assistant not responding?
- Check the AI Assistant page to verify it’s enabled for your organization
- Ensure @Last9 is invited to the channel (use /invite @Last9)
- Verify the bot user appears in the channel member list
“Not authorized” error?
- Your Slack email must match your Last9 account email
- Request a Last9 account from your organization admin if you don’t have one
- Contact cs@last9.io for access issues
Responses seem incomplete?
- Try rephrasing your question with more specific details
- Include time ranges (e.g., “in the last hour”, “since 3pm”)
- Specify service or environment names if relevant
- Use the deep links in responses to continue investigation in the dashboard
Multiple workspaces:
The Slack app supports multi-workspace installations. Install separately in each workspace where you need AI Assistant access.
Best Practices
- Start broad, then narrow: Begin with general queries like “system health overview” before diving into specific services
- Include context: Mention time ranges or specific services when investigating issues
- Use follow-up questions: The assistant maintains conversation context, so ask “tell me more about that service” to dive deeper
- Review recommended actions: The AI prioritizes actions by urgency—address Immediate items first
Privacy and Security
The AI Assistant functions as a copilot for telemetry data, including logs, metrics, traces, events, dashboards, and alerts. This feature is optional and can be enabled only with explicit administrator consent within your organization’s Last9 account.
How the AI Works
The AI Assistant provides a natural language interface for querying telemetry data. When you ask questions like “Why is the 5xx error rate increasing?” or “Explain this alert,” the system uses a Large Language Model (LLM) to interpret and respond using telemetry data available within the platform.
Use of LLMs:
- The LLM converts your natural language queries into Last9’s internal query format
- No customer telemetry data is transmitted to the LLM during this translation step
- When summarization or interpretation is needed, only the minimal necessary data is processed
Data Shared with LLMs
- Limited metadata (e.g., tags, labels) when needed for query translation
- Query results only when analysis is explicitly requested
For example, if you ask “Are there any 5xx errors in the login service?”, the system:
1. Executes the internal Last9 query
2. Processes the response internally
3. Applies sanitization and PII removal
4. Uses the LLM only for high-level summarization
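The data-minimization flow described above can be sketched as a pipeline. Everything in this sketch, including the function names and redaction patterns, is an illustration of the principle rather than Last9's implementation:

```python
import re

# Illustrative PII patterns; a real sanitizer would cover far more cases.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def sanitize(record: str) -> str:
    """Strip obvious PII before any text is handed to the LLM."""
    record = EMAIL.sub("[redacted-email]", record)
    return IPV4.sub("[redacted-ip]", record)

def answer(question: str, run_query, summarize_with_llm):
    # 1. Execute the internal Last9 query (no telemetry sent to the LLM here)
    results = run_query(question)
    # 2-3. Process and sanitize the response internally
    cleaned = [sanitize(r) for r in results]
    # 4. Only the minimal, sanitized text reaches the LLM, for summarization
    return summarize_with_llm(cleaned)

out = answer(
    "Are there any 5xx errors in the login service?",
    run_query=lambda q: ["503 from 10.0.0.12 for user bob@example.com"],
    summarize_with_llm=lambda rows: " | ".join(rows),
)
print(out)  # 503 from [redacted-ip] for user [redacted-email]
```

The key property is ordering: redaction happens before the summarization call, so raw identifiers never leave the internal processing step.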
Model Configuration
You can choose between:
- Last9-managed models: Frontier LLMs under strict security controls and data minimization practices
- Bring Your Own LLM (BYOL): Connect your own model for complete control over AI processing
Security guarantees:
- No customer data is used for model training in either scenario
- AI Assistant and related models are hosted on secure infrastructure
- All data is encrypted in transit and at rest
Transparency
- Each query shows execution progress (Query Logs, Query Exceptions, etc.)
- “View in …” links let you verify the exact queries being run
- All AI-generated filters and queries are visible and editable
Troubleshooting
AI Assistant not loading?
- Ensure you have an active Last9 account with data flowing
- Try refreshing the page or starting a new chat
Not getting expected results?
- Try rephrasing your question with more specific details
- Include time ranges (e.g., “in the last hour”)
- Specify service or environment names if relevant
Please get in touch with us on Discord or Email if you have any questions.