Discover Map

Visualize service dependencies and trace request flows across your distributed architecture

The Map feature in Discover provides a real-time topology view of your distributed architecture, showing how services communicate with each other and where performance issues originate. Each service node displays key metrics — throughput, error rate, and P95 latency — so you can assess the health of your entire system at a glance.

[Screenshot: Map Overview]

Use the Map to understand service dependencies, trace error propagation across services, and quickly identify bottlenecks during incident response.

Prerequisites

To use the Map feature, you need distributed tracing configured for your services:

Required:

  • Traces: Distributed tracing data is mandatory for automatic service discovery and dependency mapping. Configure OpenTelemetry or other tracing instrumentation for your applications. See all traces integrations.

Without traces, the Map cannot discover services or their relationships. Services appear on the Map automatically once trace data is received.
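Below is a minimal sketch of the kind of OpenTelemetry setup this implies, using the Python SDK with an OTLP/HTTP exporter. The endpoint, credentials, and service name are placeholders, not real values; use the settings from your Last9 integration.

```python
# Minimal OpenTelemetry tracing setup (Python SDK, OTLP over HTTP).
# Endpoint, auth header, and service name are placeholders; substitute
# the values from your Last9 integration settings.
from opentelemetry import trace
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# "service.name" becomes the node label on the Map.
resource = Resource.create({"service.name": "checkout-service"})

provider = TracerProvider(resource=resource)
provider.add_span_processor(
    BatchSpanProcessor(
        OTLPSpanExporter(
            endpoint="https://otlp.example.last9.io/v1/traces",  # placeholder URL
            headers={"Authorization": "Basic <your-token>"},     # placeholder credentials
        )
    )
)
trace.set_tracer_provider(provider)

# Any span emitted from here on feeds service discovery on the Map.
tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("GET /checkout"):
    pass  # handle the request
```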

Understanding the Map View

Access the Map at Discover > Map in Last9.

Service Nodes

Each service appears as a node on the Map displaying three key metrics:

  • Throughput: Requests per minute (rpm) handled by the service
  • Error Rate: Number of failed requests per minute (rpm)
  • P95: 95th percentile response time
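
For intuition, here is a hypothetical sketch of how these three numbers relate to a window of spans. The actual aggregation happens inside Last9; the Span shape below is an illustrative assumption, not Last9's schema.

```python
# Illustrative only: throughput, error rate, and P95 over a window of spans.
import math
from dataclasses import dataclass

@dataclass
class Span:
    duration_ms: float
    is_error: bool

def node_metrics(spans: list[Span], window_minutes: float) -> tuple[float, float, float]:
    """Return (throughput rpm, error rpm, P95 ms) for one service's spans."""
    if not spans:
        return 0.0, 0.0, 0.0
    throughput_rpm = len(spans) / window_minutes
    error_rpm = sum(s.is_error for s in spans) / window_minutes
    durations = sorted(s.duration_ms for s in spans)
    # Nearest-rank 95th percentile of response times.
    p95_ms = durations[math.ceil(0.95 * len(durations)) - 1]
    return throughput_rpm, error_rpm, p95_ms
```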

Connection Lines

Directed arrows between nodes represent service-to-service communication. The arrow points from the caller to the downstream dependency, showing the direction of request flow.
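Conceptually, an edge exists when a span recorded by one service has its parent span in another service. The sketch below shows that derivation over a flat list of spans; the field names are assumptions for illustration, not Last9's internal schema.

```python
# Sketch: derive caller -> dependency edges from parent/child span links.
from collections import Counter

def service_edges(spans: list[dict]) -> Counter:
    """spans: dicts with 'span_id', 'parent_span_id', 'service_name'."""
    owner = {s["span_id"]: s["service_name"] for s in spans}
    edges: Counter = Counter()
    for s in spans:
        parent_id = s.get("parent_span_id")
        if parent_id in owner and owner[parent_id] != s["service_name"]:
            # Arrow points from the caller to the downstream dependency.
            edges[(owner[parent_id], s["service_name"])] += 1
    return edges
```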

Health Indicators

The Map uses color coding to surface problems quickly:

  • Green border: Service is healthy with no errors
  • Red border: Service is experiencing errors
  • Red error rate text: Error rate is non-zero, displayed in red for visibility
  • Red dotted connection: Errors are occurring on requests between the connected services
  • Green solid connection: Requests between services are healthy

[Screenshot: Map with Error Indicators]

In the example above, the red borders on services and the dotted red connection line between them indicate active errors flowing through that path.

Filtering the Map

Use the toolbar at the top of the Map to filter and focus the view:

Environment: Select the environment to display (e.g., production, staging) using the Environment dropdown.

Service status filters:

  • All: Show all discovered services
  • Errors: Show only services with active errors
  • Slow: Show only services with high response times

Time range: Adjust the time window using the time picker in the top-right corner. The Map displays service metrics aggregated over the selected period.
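
The Environment dropdown can only distinguish environments that your telemetry actually reports. With OpenTelemetry, the conventional place for this is the deployment.environment resource attribute; whether your pipeline surfaces it in the dropdown depends on your setup, so treat this sketch as an assumption.

```python
# Assumption: the Environment filter groups services by an environment
# attribute on the telemetry. "deployment.environment" is the
# OpenTelemetry semantic-convention key for this.
from opentelemetry.sdk.resources import Resource

resource = Resource.create({
    "service.name": "checkout-service",      # placeholder service name
    "deployment.environment": "production",  # e.g. "production" or "staging"
})
```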

Interacting with Services

Click on any service node to open a context menu with quick actions:

[Screenshot: Service Context Menu]

  • Focus Dependencies: Isolate the selected service and its direct upstream and downstream dependencies, dimming unrelated services
  • View Service Details: Navigate to the full Service Details page for in-depth performance analysis
  • View Traces: Open the Traces explorer pre-filtered to traces involving this service
  • View Logs: Open the Logs explorer pre-filtered to logs from this service

Navigating the Map

Use the controls in the bottom-left corner or standard input gestures to navigate:

  • Zoom in / out (+/-): Adjust the zoom level. You can also pinch to zoom on a trackpad or use the scroll wheel with a mouse.
  • Fit to screen: Reset the view to fit all services in the viewport
  • Pan: Drag on the canvas background to move around the Map

The minimap in the bottom-right corner shows your current viewport position within the full Map.

Best Practices

Incident Response:

  • Start with the Errors filter to see which services are affected
  • Use Focus Dependencies on the service closest to the error source to understand the blast radius
  • Follow red dotted connection lines to trace error propagation across services
  • Drill into traces and logs from the context menu for root cause analysis

Architecture Review:

  • Periodically review the full Map to understand how your architecture has evolved
  • Identify services with unusually high fan-out (many downstream dependencies) as potential single points of failure
  • Look for services with high P95 latency that may be bottlenecks for upstream callers

Performance Monitoring:

  • Use the Slow filter to identify services with degraded response times
  • Compare throughput across connected services to spot capacity imbalances
  • Monitor error rates on critical paths between high-traffic services

Troubleshooting

Please get in touch with us on Discord or Email if you have any questions.