
AI Assistant

Ask questions about your infrastructure in plain English and get instant insights from logs, metrics, traces, and alerts.

Last9’s AI Assistant brings conversational observability directly into your workflow. Ask questions like “What’s causing the recent spike in errors?” or “Show me the system health overview” and get instant, actionable insights from your telemetry data.

[Image: AI Assistant welcome screen]

AI Assistant vs Last9 MCP

Last9 offers two complementary AI-powered interfaces for working with your observability data:

| | AI Assistant | Last9 MCP |
| --- | --- | --- |
| Where | Built into the Last9 dashboard | Your IDE (Cursor, VS Code, Windsurf, Claude Desktop) |
| Best for | Investigating incidents, reviewing system health, exploring alerts | Debugging production issues while coding, correlating code changes with production |
| How it works | Natural language queries against your telemetry data in the browser | The MCP protocol connects your AI coding agent to Last9’s observability APIs |

The AI Assistant is ideal when you’re in the Last9 dashboard and want to quickly interrogate your infrastructure: check system health, investigate errors, or review alerts without writing queries.

Last9 MCP brings production context into your development environment. When you’re debugging code in your IDE, MCP lets your AI coding agent query exceptions, logs, traces, and metrics without leaving the editor.

Use them together: start with the AI Assistant for broad investigation, then switch to MCP in your IDE when you’re ready to write the fix.
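Connecting an editor to Last9 MCP follows the standard MCP client configuration pattern. The snippet below is an illustrative sketch only: the command, package name, endpoint, and environment variable names are assumptions, so consult the Last9 MCP documentation for the actual values.

```json
{
  "mcpServers": {
    "last9": {
      "command": "npx",
      "args": ["-y", "@last9/mcp-server"],
      "env": {
        "LAST9_API_KEY": "<your-api-key>",
        "LAST9_BASE_URL": "<your-last9-endpoint>"
      }
    }
  }
}
```

Once registered, your AI coding agent can call the server’s tools to pull exceptions, logs, traces, and metrics into its context while you work.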

Key Features

Natural Language Queries

Ask questions in plain English about your infrastructure:

  • “What errors are happening currently?”
  • “Show me the p95 latency for my services”
  • “Are there any 5xx errors?”
  • “Analyze the performance of my API”

Quick Actions

The AI Assistant provides quick action cards to help you get started:

| Quick Action | Description |
| --- | --- |
| Recent Errors | Latest error patterns and incidents across your services |
| Performance | P95 latency and response metrics for your applications |
| Active Alerts | Currently firing alerts and overall alert status |
| Database Health | Performance and usage insights for your databases |
| Trace Analysis | Identify slow request patterns and bottlenecks |
| System Overview | Comprehensive health report across all services |

Query Execution Progress

When you ask a question, the AI Assistant shows real-time progress as it queries your observability data:

  • Query Logs - Searching log data for relevant information
  • Query Exceptions - Analyzing application exceptions and errors
  • Query Metrics - Fetching performance metrics
  • Query Traces - Examining distributed traces

Each step shows a completion status, so you can see exactly what data sources are being analyzed.
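Conceptually, the assistant fans out over each telemetry source in turn and reports completion per step. A minimal sketch of that flow, with stubbed query functions standing in for Last9’s internal APIs:

```python
# Minimal sketch of per-source query progress reporting.
# The four query functions are stubs; Last9's real APIs differ.

def query_logs(q):       return {"matches": 12}
def query_exceptions(q): return {"matches": 3}
def query_metrics(q):    return {"series": 5}
def query_traces(q):     return {"spans": 40}

STEPS = [
    ("Query Logs", query_logs),
    ("Query Exceptions", query_exceptions),
    ("Query Metrics", query_metrics),
    ("Query Traces", query_traces),
]

def run(question):
    results = {}
    for name, fn in STEPS:
        results[name] = fn(question)
        print(f"{name}: done")  # completion status shown per step
    return results

run("What errors are happening currently?")
```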

Every query the AI Assistant executes includes a View in … link that takes you directly to the underlying data in Last9.

[Image: AI Assistant deep links]

This allows you to:

  • Continue your investigation with the full query interface
  • Explore the underlying data in more detail
  • Share specific queries with your team
  • Build on the AI-generated queries with additional filters

Deep links preserve the exact query, time range, and filters, making it easy to transition from conversational exploration to detailed analysis.

Intelligent Analysis

The AI Assistant doesn’t just return raw data; it provides intelligent analysis, including:

  • Service Health Tables: Visual overview of all services with throughput, error rates, response times, and health status
  • Key Findings: Automatically identified critical issues and positive indicators
  • Structured Error Details: Service name, environment, error type, and error messages for quick diagnosis
  • Alert Configuration Summary: Overview of configured alert rules and active instances
  • Recommended Actions: Prioritized action items (Immediate, High Priority, Monitor, Review)

Getting Started

  1. Navigate to AI Assistant

    Click on AI Assistant in the left sidebar of your Last9 dashboard, or click the AI Assistant card on the Home page.

  2. Ask a question or use Quick Actions

    Type your question in the input field, or click one of the quick action cards to get started immediately.

  3. Review the analysis

    The assistant will gather information from your observability data and present:

    • Service health status
    • Active alerts and their severity
    • Key findings and recommendations
    • Actionable next steps
  4. Continue the conversation

    Ask follow-up questions to dive deeper into specific services, alerts, or issues. The assistant maintains context throughout your conversation.

Example Use Cases

System Health Overview

"Give me a system health overview"

The assistant provides:

  • Available environments (Production, Staging, etc.)
  • Service health table with throughput, error rates, and response times
  • Critical issues with firing alerts
  • Positive indicators (no exceptions, healthy services)
  • Recommended actions prioritized by urgency

Error Investigation

"Which errors are happening currently?"

The assistant analyzes:

  • Recent exceptions across services
  • Error patterns and frequencies
  • Affected endpoints and services
  • Suggested investigation steps

Performance Analysis

"What is the p95 latency for my services?"

The assistant returns:

  • P95 response times for each service
  • Services exceeding thresholds (marked as Slow or Critical)
  • Performance degradation details
  • Comparison with configured thresholds

Alert Review

"Show me active alerts"

The assistant displays:

  • Currently firing alerts with severity levels
  • Alert duration and trigger times
  • Threshold values vs current values
  • Recommended remediation steps

Chat History

Your conversations are saved in the chat history sidebar, allowing you to:

  • Resume previous investigations
  • Reference past analyses
  • Track recurring issues over time

To start a fresh conversation, click the + New chat button.

Ask Mode in Logs and Traces

In addition to the dedicated AI Assistant, you can use natural language queries directly within the Logs and Traces explorers through Ask Mode.

Using Ask Mode

Navigate to Logs Explorer or Traces Explorer and click the Ask tab to access AI-powered querying.

[Image: Ask Mode in Traces]

Type your question in natural language, such as:

  • “investigate slow requests from last9 api”
  • “show me errors from the payment service”
  • “find database queries taking more than 1 second”

The AI will generate the appropriate filters and display matching results.
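To make the idea concrete, the toy translator below maps a natural-language question onto a structured filter set. It is purely illustrative: Last9’s actual filter format and translation logic are internal to the product.

```python
# Illustrative only: a toy translation from a natural-language
# question into structured filters. Last9's real filter format
# and translation are internal to the product.
import re

def to_filters(question: str) -> dict:
    q = question.lower()
    filters = {}
    if "error" in q:
        filters["status"] = "error"
    m = re.search(r"from the (\w+) service", q)
    if m:
        filters["service"] = m.group(1)
    m = re.search(r"more than (\d+) second", q)
    if m:
        filters["min_duration_ms"] = int(m.group(1)) * 1000
    return filters

print(to_filters("show me errors from the payment service"))
# {'status': 'error', 'service': 'payment'}
```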

[Image: Ask Mode results in Logs]

Quick Start Templates

Ask Mode provides pre-built templates to help you get started quickly:

| Template | Description |
| --- | --- |
| Error Traces | Show all traces with error status codes |
| Slow Traces | Find traces with duration greater than 1 second |
| Database Queries | Filter traces containing database operations |
| HTTP Requests | Show traces with HTTP status codes and request paths |
| Failed Requests | View traces with 4xx and 5xx status codes |
| Service Errors | Display traces from a specific service with errors |

Team Queries

Ask Mode also displays saved queries from your team, making it easy to reuse common investigation patterns across your organization.

Best Practices

  • Start broad, then narrow: Begin with general queries like “system health overview” before diving into specific services
  • Include context: Mention time ranges or specific services when investigating issues
  • Use follow-up questions: The assistant maintains conversation context, so ask “tell me more about that service” to dive deeper
  • Review recommended actions: The AI prioritizes actions by urgency; address Immediate items first

Privacy and Security

The AI Assistant functions as a copilot for telemetry data, including logs, metrics, traces, events, dashboards, and alerts. This feature is optional and can be enabled only with explicit administrator consent within your organization’s Last9 account.

How the AI Works

The AI Assistant provides a natural language interface for querying telemetry data. When you ask questions like “Why is the 5xx error rate increasing?” or “Explain this alert,” the system uses a Large Language Model (LLM) to interpret and respond using telemetry data available within the platform.

Use of LLMs:

  • The LLM converts your natural language queries into Last9’s internal query format
  • No customer telemetry data is transmitted to the LLM during this translation step
  • When summarization or interpretation is needed, only the minimal necessary data is processed

Data Shared with LLMs

  1. Limited metadata (e.g., tags, labels) when needed for query translation
  2. Query results only when analysis is explicitly requested

For example, if you ask “Are there any 5xx errors in the login service?”, the system:

  1. Executes the internal Last9 query
  2. Processes the response internally
  3. Applies sanitization and PII removal
  4. Uses the LLM only for high-level summarization
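The four steps above can be sketched as a pipeline in which raw query results are processed and scrubbed before anything reaches the LLM. Everything here is illustrative, including the redaction rules, the stubbed query runner, and the stubbed LLM call; none of it is Last9’s actual implementation.

```python
import re

# Illustrative sketch of the privacy flow: execute the query,
# process and sanitize results internally, and only then hand
# a minimal, PII-scrubbed payload to the LLM for summarization.
# The query runner and LLM call are stubs, not Last9 APIs.

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def run_query(query: str) -> list:
    # Stub standing in for internal Last9 query execution.
    return [
        {"service": "login", "status": 502,
         "message": "upstream timeout for user alice@example.com from 10.0.0.7"},
    ]

def sanitize(rows: list) -> list:
    # Remove obvious PII (emails, IPs) before any LLM involvement.
    out = []
    for row in rows:
        msg = EMAIL.sub("[email]", row["message"])
        msg = IPV4.sub("[ip]", msg)
        out.append({**row, "message": msg})
    return out

def summarize_with_llm(rows: list) -> str:
    # Stub: a real system would send only this sanitized,
    # minimal payload to the model for high-level summarization.
    return f"{len(rows)} 5xx error(s) in the login service"

rows = sanitize(run_query("status:5xx service:login"))
print(summarize_with_llm(rows))
```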

Model Configuration

You can choose between:

  • Last9-managed models: Frontier LLMs under strict security controls and data minimization practices
  • Bring Your Own LLM (BYOL): Connect your own model for complete control over AI processing

Security guarantees:

  • No customer data is used for model training in either scenario
  • AI Assistant and related models are hosted on secure infrastructure
  • All data is encrypted in transit and at rest

Transparency

  • Each query shows execution progress (Query Logs, Query Exceptions, etc.)
  • “View in …” links let you verify the exact queries being run
  • All AI-generated filters and queries are visible and editable

Troubleshooting

AI Assistant not loading?

  • Ensure you have an active Last9 account with data flowing
  • Try refreshing the page or starting a new chat

Not getting expected results?

  • Try rephrasing your question with more specific details
  • Include time ranges (e.g., “in the last hour”)
  • Specify service or environment names if relevant

Please get in touch with us on Discord or Email if you have any questions.