Discover Exceptions
Monitor, investigate, and resolve application exceptions across all your services with detailed context, correlated traces, and AI-powered analysis
The Exceptions feature in Discover gives you a unified view of all application exceptions across your services. Instead of checking each service individually, you can see every exception type, its frequency, severity, and the service it originates from in a single table. Drill into any exception to view correlated traces, request context, and logs for fast root cause analysis.

Prerequisites
To use the Exceptions feature, ensure you have the following integrations configured:
Required:
- Traces: Distributed tracing data is mandatory for exception detection and correlation. Configure OpenTelemetry or other tracing instrumentation for your applications; a minimal instrumentation sketch follows this list. See all traces integrations.
Optional:
- Logs: Application logs provide additional troubleshooting context when investigating exceptions. Configure log forwarding from your applications. See all logs integrations.
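With OpenTelemetry, exceptions are usually recorded on spans as exception events with an error status, and that trace data is what exception detection works from. The sketch below is a minimal illustration using the OpenTelemetry Python SDK; the service, operation, and helper names are hypothetical, and it assumes a tracer provider and exporter are already configured per your traces integration.

```python
from opentelemetry import trace
from opentelemetry.trace import SpanKind, Status, StatusCode

tracer = trace.get_tracer("payments-service")  # hypothetical service name

def process_payment(order_id: str) -> None:
    # Hypothetical downstream call that fails, used only for illustration.
    raise ConnectionRefusedError("ECONNREFUSED: payment gateway unreachable")

def charge(order_id: str) -> None:
    # The span name ("POST /charge") and kind (Server) are what surface in the
    # Operation and Operation Type columns of the Exceptions table.
    with tracer.start_as_current_span("POST /charge", kind=SpanKind.SERVER) as span:
        try:
            process_payment(order_id)
        except Exception as exc:
            # Record the exception as a span event and mark the span as errored,
            # the standard OpenTelemetry way to attach exception details to a trace.
            span.record_exception(exc)
            span.set_status(Status(StatusCode.ERROR, str(exc)))
            raise
```

Most auto-instrumentation libraries record unhandled exceptions like this for you; the manual pattern above is mainly useful for exceptions you catch and handle yourself.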
Understanding the Exceptions Dashboard
Access the Exceptions dashboard at Discover > Exceptions in Last9.
The dashboard displays all detected exceptions in a sortable table. Each row represents a unique combination of exception type, service, and operation.
Table Columns
| Column | Description |
|---|---|
| Exception Type | The exception class or error code (e.g., HttpError, TypeError, ECONNREFUSED, errorString) |
| Service | The service where the exception originated, shown with a language/runtime icon |
| Operation | The span name or API endpoint where the exception occurred (e.g., POST, View, sql:query) |
| Operation Type | The span kind: Server, Client, Internal, Producer, or Consumer |
| Count | Total number of occurrences in the selected time range. Default sort is by count descending |
| Severity | Color-coded badge based on occurrence count |
| Last Seen | How recently the exception last occurred (e.g., “3m ago”, “0s ago”) |
Severity Levels
Severity is automatically assigned based on the exception count within the selected time range:
| Severity | Count Threshold | Badge Color |
|---|---|---|
| Critical | 1,000+ occurrences | Red |
| High | 100 - 999 occurrences | Orange |
| Medium | 10 - 99 occurrences | Yellow |
| Low | 1 - 9 occurrences | Blue |
Click any column header to sort the table by that column. Click an exception row to open the detail panel.
Filtering Exceptions
The left sidebar provides filters to narrow down the exceptions list.
Default filters:
- Service: Filter by one or more services
- Exception Type: Filter by specific exception classes
- Severity: Filter by severity level (Critical, High, Medium, Low)
Dynamic label filters:
Additional filter sections appear based on the labels present in your trace data. Common examples include process_runtime_name, process_runtime_version, telemetry_sdk_language, and segment. These vary by organization and environment depending on the attributes your instrumentation reports.
- Expand any filter category in the left sidebar
- Select one or more values
- Click Apply Filters to update the table
- Click Clear to reset all filters
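The dynamic label filters come from the resource and span attributes your instrumentation reports. The sketch below, assuming the OpenTelemetry Python SDK, shows where those labels originate: the SDK attaches attributes such as telemetry.sdk.language automatically, and you can merge in your own (the segment key is a hypothetical example matching the filter mentioned above).

```python
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider

# Resource.create merges your attributes with the SDK defaults
# (telemetry.sdk.language, telemetry.sdk.version, and so on); process/runtime
# detectors can add more depending on how the SDK is configured.
resource = Resource.create({
    "service.name": "payments-service",      # hypothetical service name
    "deployment.environment": "production",
    "segment": "checkout",                   # hypothetical custom label value
})

trace.set_tracer_provider(TracerProvider(resource=resource))
```

The underscore-separated names in the sidebar (for example telemetry_sdk_language) appear to be these dotted OpenTelemetry attribute keys with dots normalized to underscores.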
Additional controls at the top of the page:
- Environment selector: Switch between deployment environments (e.g., production, staging)
- Date picker: Set the time range for exception data
Exception Details
Click any exception row to open the detail panel on the right side of the page.

The detail panel header shows:
- Exception type with severity badge
- Service name
- Operation name
- Occurrences count for the selected time range
- Occurrences chart: A time series showing exception frequency over time, useful for spotting spikes and correlating with deployments
Use the up/down arrows in the top-right corner to navigate between exceptions without closing the panel.
Traces
The Traces tab lists all correlated traces where this exception occurred. Each trace row shows:
| Column | Description |
|---|---|
| Start Time | Timestamp of the trace |
| Trace ID | Unique identifier for the distributed trace |
| Operation & Service | The operation name and originating service |
| Duration | Total trace duration |
| Kind | Span kind (Internal, Server, Client, etc.) |
| Type & Status | Shows “Error” status for exception-bearing spans |
Click any trace row to navigate to the full distributed trace view for detailed span-level analysis.
Context
The Context tab shows trace attributes from the exception span and its root span, including HTTP request details, user information, environment data, and custom attributes.

Context sections include:
- HTTP Request: Method, URL, route, status code, user agent, request/response size
- User Context: User ID, email, session ID (when available)
- Device & Environment: Device type, OS, browser, IP addresses
- Database: Database system, name, query, operation
- gRPC: RPC system, service, method, status code
- Custom Attributes: Any additional trace attributes your application reports
All values are copyable with a single click.
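These sections largely map to standard OpenTelemetry semantic-convention attributes on the span. A minimal sketch with the OpenTelemetry Python SDK (exact attribute keys depend on your instrumentation and semantic-convention version; all values are hypothetical):

```python
from opentelemetry import trace

tracer = trace.get_tracer("payments-service")  # hypothetical service name

with tracer.start_as_current_span("POST /charge") as span:
    # Semantic-convention keys feed sections like HTTP Request and User Context;
    # anything else your application sets appears under Custom Attributes.
    span.set_attribute("http.request.method", "POST")
    span.set_attribute("url.path", "/charge")
    span.set_attribute("http.response.status_code", 502)
    span.set_attribute("enduser.id", "user-42")
    span.set_attribute("checkout.plan", "enterprise")  # hypothetical custom attribute
```

HTTP server and database instrumentation libraries usually set their attributes automatically; manual set_attribute calls are mostly needed for user context and custom keys.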
Logs
The Logs tab shows logs correlated with the exception, pre-filtered by service name and exception type.

Features include:
- Index selector: Choose between default, physical, or rehydration log indexes
- Pre-populated filters: Automatically set to `service = <service_name>` and `body contains all <exception_type>`
- Log volume chart: Visual representation of log volume over time
- View in Logs: Opens the full Logs Explorer with the same filters applied
You can add or modify filters directly in the Logs tab to refine your search.
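For the `body contains all <exception_type>` filter to find matches, the exception type has to appear somewhere in the log body. With standard Python logging, `logger.exception()` writes the traceback, including the exception class name, into the body, as in this minimal sketch (logger and function names are hypothetical, and it assumes log forwarding to Last9 is already configured):

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments-service")  # hypothetical logger name

def process_payment(order_id: str) -> None:
    # Hypothetical downstream call that fails, used only for illustration.
    raise ConnectionRefusedError("ECONNREFUSED: payment gateway unreachable")

def charge(order_id: str) -> None:
    try:
        process_payment(order_id)
    except Exception:
        # logger.exception() appends the traceback, including the exception class
        # name (here ConnectionRefusedError), to the log body, which is what the
        # pre-populated body filter matches on.
        logger.exception("payment failed for order %s", order_id)
        raise
```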
Adaptive Alerts
Adaptive alerts automatically detect unusual spikes in exception frequency using a statistical deviation model, reducing false positives compared to static threshold-based alerts. You can enable adaptive alerts for any exception directly from the detail panel.
Enabling Adaptive Alerts
- Click any exception row to open the detail panel
- Click Manage Adaptive Alerts in the top-right corner
- Toggle the switch to enable adaptive alerting for this exception
- The alert rule is created automatically with an optimized configuration based on the exception’s historical patterns
When you enable an adaptive alert, Last9 automatically:
- Creates an alert rule named `exception_alert_for_<exception_type>_<service>_<operation>`
- Monitors the `exception_count` indicator for this specific exception
- Uses the exception's trend data to establish a baseline
- Detects anomalous spikes based on statistical deviation from the baseline (a conceptual sketch follows this list)
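Last9 does not document the exact deviation model here, but the idea can be illustrated with a toy example: treat recent per-interval exception counts as the baseline and flag the latest count when it sits several standard deviations above it. This is a conceptual sketch only, not Last9's implementation.

```python
from statistics import mean, stdev

def is_anomalous(history: list[int], latest: int, threshold: float = 3.0) -> bool:
    # Toy deviation check: compare the latest exception count against the
    # mean and spread of the recent baseline. Purely illustrative.
    if len(history) < 2:
        return False
    baseline, spread = mean(history), stdev(history)
    if spread == 0:
        return latest > baseline
    return (latest - baseline) / spread > threshold

# A steady ~20 exceptions/minute baseline followed by a spike to 180:
print(is_anomalous([18, 22, 19, 21, 20, 23], 180))  # True
```

Compared with a static threshold, this kind of check adapts to each exception's normal volume, which is why it produces fewer false positives for noisy but stable exceptions.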
Managing Alert Rules
Once enabled, you can manage the alert rule like any other Last9 alert:
- View alert rules: Navigate to Alerting > Monitor to see all active alert rules
- Set notification channels: Configure where alerts are sent at Alerting > Notification Channels
- Inspect alerts: When an alert fires, click the inspect link in the notification to open the exception detail panel with the relevant time range pre-selected
Alert rules are grouped by environment and exception, making it easy to manage alerts across multiple services and operations.
AI-Powered Exception Analysis
If the AI Assistant is enabled for your organization, the Auto-fix Exception button appears in the detail panel.

Clicking it opens the AI-Powered Exception Resolution modal with two options:
| Option | Description | Requirement |
|---|---|---|
| Analyze Only | Get AI-generated insights and recommendations without making any changes to your code | AI Assistant enabled |
| Auto-Fix | An agent analyzes the exception, creates a fix PR, and deploys automatically | AI Assistant + Agents enabled |
Investigating Exceptions
Follow this workflow to efficiently triage and resolve exceptions:
- Spot high-impact exceptions: Sort by Count (default) or filter by Critical / High severity to focus on the most frequent exceptions
- Open the detail panel: Click an exception row to view the occurrences chart and identify when spikes occurred
- Check correlated traces: Use the Traces tab to examine individual traces and understand the execution path leading to the exception
- Review request context: Switch to the Context tab to see HTTP request details, user information, and custom attributes for additional debugging clues
- Search related logs: Use the Logs tab to find detailed error messages and stack traces around the time of the exception
- Set up alerts: Click Manage Adaptive Alerts to get notified of future spikes for this exception
- Use AI analysis: If available, click Auto-fix Exception to get AI-powered insights or an automated fix
Best Practices
Prioritization:
- Focus on Critical and High severity exceptions first, as they represent the highest volume errors
- Pay attention to exceptions with recent Last Seen timestamps, especially “0s ago”, indicating actively occurring issues
- Monitor `ECONNREFUSED` and other network-related exceptions, as they often indicate infrastructure problems
Monitoring Strategy:
- Review the Exceptions dashboard after every deployment to catch newly introduced errors
- Use the time range selector to compare exception counts before and after a release
- Set up adaptive alerts for critical exception types to get proactive notifications
Troubleshooting:
- Start with the Traces tab to understand the execution flow and identify where the exception originates
- Use the Context tab to check if exceptions are tied to specific users, endpoints, or request patterns
- Pivot to the Logs tab for detailed stack traces and error messages
- For service-specific exception analysis, navigate to the service’s Exceptions tab under Discover > Services
Troubleshooting
Please get in touch with us on Discord or Email if you have any questions.