Using RUM

Navigate the RUM dashboard, filter performance data, and analyze Core Web Vitals to optimize user experience.

Real User Monitoring

Real User Monitoring provides comprehensive insights into your web application’s real-world performance. Learn how to navigate the interface, filter data effectively, and interpret Core Web Vitals metrics.

Dashboard Overview

The RUM dashboard is organized into three main views:

  • Performance: Core Web Vitals analysis and traffic insights
  • Errors (Coming Soon): JavaScript errors and failed requests
  • Sessions (Coming Soon): User journey and engagement analysis

Filtering Your Data

Effective filtering helps you focus on specific performance issues and user segments.

Global Filters

  1. Application: Choose which application to analyze when you have multiple services instrumented with RUM.
  2. Environment: Filter by deployment environment to compare performance across production, staging, and development.
  3. Version: Compare performance between different application versions to measure the impact of releases.
  4. Time Period: Select your analysis timeframe in the top-right corner. The default is the last 30 minutes, but you can extend to hours or days for trend analysis.

Path and Attribute Filtering

Use the search bar to filter by specific pages or user attributes.

Path Filtering

Filter by URL paths using these operators:

  • = (equals): Exact path match
  • exists: Pages where the path field is present
  • not exists: Pages missing path information
  • starts with: Paths beginning with specific text
  • ends with: Paths ending with specific text
  • contains: Paths containing specific text
  • does not contain: Paths excluding specific text

Example: path starts with /api to analyze API endpoint performance.

Attribute Filtering

Filter by any collected attribute, including:

Page Attributes:

  • origin: Page origin (protocol + hostname)
  • page.hash: Current page hash
  • page.hostname: Current page hostname
  • page.route: Current page route/path
  • page.search: Current page search/query string
  • page.url: Current page URL
  • url.path: Current page path
  • path: Folded path (for navigation)

Web Vitals Attributes:

For each web vital metric (CLS, FCP, LCP, TTFB, INP), the following attributes are collected (where applicable):

  • web_vital.<metric>.id: Unique ID for the metric
  • web_vital.<metric>.value: Value of the metric
  • web_vital.<metric>.timestamp: Timestamp when the metric was recorded
  • web_vital.<metric>.rating: Rating (e.g., ‘good’, ‘needs-improvement’, ‘poor’)
  • web_vital.<metric>.delta: Delta value
  • web_vital.<metric>.entries_count: Number of entries
  • web_vital.<metric>.start_time: Start time (for TTFB)
  • web_vital.<metric>.duration: Duration (for TTFB)
  • web_vital.<metric>.fetch_start: Fetch start time (for TTFB)
  • web_vital.<metric>.response_start: Response start time (for TTFB)
  • web_vital.<metric>.request_start: Request start time (for TTFB)
  • web_vital.<metric>.waiting_duration: Waiting duration (for TTFB, attribution)
  • web_vital.<metric>.dns_duration: DNS duration (for TTFB, attribution)
  • web_vital.<metric>.connection_duration: Connection duration (for TTFB, attribution)
  • web_vital.<metric>.request_duration: Request duration (for TTFB, attribution)
  • web_vital.navigation_type: Navigation type (e.g., ‘reload’, ‘navigate’)
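
These attributes mirror the metric object reported by the open-source web-vitals browser library. As a rough sketch of where the values come from (the sendToRum transport and the exact attribute spellings below are illustrative assumptions, not the Last9 SDK):

```ts
// Sketch: mapping web-vitals metric objects to attributes like those listed above.
// Assumes the standard `web-vitals` npm package; sendToRum is a hypothetical
// transport, not part of the Last9 SDK.
import { onCLS, onFCP, onINP, onLCP, onTTFB, type Metric } from 'web-vitals';

function sendToRum(attributes: Record<string, string | number>): void {
  // Placeholder transport; a real setup would batch and send to your collector.
  navigator.sendBeacon('/rum', JSON.stringify(attributes));
}

function report(metric: Metric): void {
  const key = `web_vital.${metric.name.toLowerCase()}`;
  sendToRum({
    [`${key}.id`]: metric.id,
    [`${key}.value`]: metric.value,
    [`${key}.rating`]: metric.rating, // 'good' | 'needs-improvement' | 'poor'
    [`${key}.delta`]: metric.delta,
    [`${key}.entries_count`]: metric.entries.length,
    'web_vital.navigation_type': metric.navigationType,
  });
}

onCLS(report);
onFCP(report);
onLCP(report);
onINP(report);
onTTFB(report);
```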

Performance Analysis

Overview Metrics

Views Over Time: Track user traffic patterns throughout your selected time period. Use this to identify peak usage times, traffic spikes, or unusual drops that might indicate issues.

Top Paths by Views: See which pages receive the most traffic. This helps you prioritize optimization efforts on high-impact pages.

Web Vitals

Each Web Vital can be analyzed at different percentiles (P75, P90, P99) to understand performance distribution across your user base.
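
P75 answers "what do 75% of users experience or better?", while P99 surfaces the slowest tail. A rough sketch of how a percentile is derived from raw metric samples (nearest-rank method; the dashboard's exact aggregation may differ):

```ts
// Sketch: computing a percentile from raw metric samples (nearest-rank method).
function percentile(values: number[], p: number): number {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const lcpSamplesMs = [1800, 2100, 2400, 2600, 3200, 5400];
console.log(percentile(lcpSamplesMs, 75)); // 3200: 75% of views were at or under this
console.log(percentile(lcpSamplesMs, 99)); // 5400: the worst-case tail
```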

Time To First Byte (TTFB)

  • Definition: TTFB measures server responsiveness — the time between a user’s request and when the first byte of response arrives.
  • Performance Thresholds:
    • 🟢 Good: ≤ 800ms
    • 🟡 Needs Improvement: ≤ 1.8s
    • 🔴 Poor: > 1.8s
  • What It Means: High TTFB indicates server-side performance issues. This could be slow database queries, inefficient server processing, or network latency between your server and users.
  • Optimization Focus: Server performance, database optimization, CDN usage, caching strategies.
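
TTFB and its attribution timings can be cross-checked in the browser with the Navigation Timing API. A minimal sketch (times are in milliseconds, relative to the start of the navigation):

```ts
// Sketch: reading TTFB and related timings from the Navigation Timing API.
// responseStart is essentially what the TTFB web vital reports.
const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];

if (nav) {
  console.log({
    ttfb: nav.responseStart,                                   // time to first byte
    dns_duration: nav.domainLookupEnd - nav.domainLookupStart,
    connection_duration: nav.connectEnd - nav.connectStart,
    request_duration: nav.responseStart - nav.requestStart,
  });
}
```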

Largest Contentful Paint (LCP)

  • Definition: LCP tracks when the main content becomes visible - specifically when the largest content element (image, video, or text block) finishes rendering.
  • Performance Thresholds:
    • 🟢 Good: ≤ 2.5s
    • 🟡 Needs Improvement: ≤ 4s
    • 🔴 Poor: > 4s
  • What It Means: LCP directly impacts perceived loading performance. Users judge page speed based on when they see the main content, not when everything finishes loading.
  • Optimization Focus: Image optimization, lazy loading, critical resource prioritization, server response times.
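
To find out which element is responsible for a slow LCP, you can observe the same browser events the metric is built from. A minimal sketch using a PerformanceObserver:

```ts
// Sketch: observing LCP candidates. The last entry emitted before user input
// is the page's final LCP; inspect entry.element (where supported) to see
// which image or text block it is.
const lcpObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log('LCP candidate at', entry.startTime, 'ms:', entry);
  }
});
lcpObserver.observe({ type: 'largest-contentful-paint', buffered: true });
```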

First Contentful Paint (FCP)

  • Definition: FCP measures when users first see any content - the time until the first text, image, or other element appears.
  • Performance Thresholds:
    • 🟢 Good: ≤ 1.8s
    • 🟡 Needs Improvement: ≤ 3s
    • 🔴 Poor: > 3s
  • What It Means: FCP indicates how quickly users perceive your page is starting to load. Even if it’s just a small element, it signals that something is happening.
  • Optimization Focus: Critical CSS, font loading strategies, eliminating render-blocking resources.
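
FCP comes from the browser's Paint Timing API, so it can be checked directly on a page you are debugging. A minimal sketch:

```ts
// Sketch: reading FCP from the Paint Timing API; 'first-contentful-paint'
// is the same event the FCP web vital is based on.
const paintObserver = new PerformanceObserver((list) => {
  const fcp = list.getEntries().find((e) => e.name === 'first-contentful-paint');
  if (fcp) {
    console.log('FCP:', fcp.startTime, 'ms');
    paintObserver.disconnect();
  }
});
paintObserver.observe({ type: 'paint', buffered: true });
```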

Cumulative Layout Shift (CLS)

  • Definition: CLS quantifies visual stability by measuring unexpected layout shifts during page loading.
  • Performance Thresholds:
    • 🟢 Good: ≤ 0.1
    • 🟡 Needs Improvement: ≤ 0.25
    • 🔴 Poor: > 0.25
  • What It Means: High CLS creates frustrating experiences when elements move unexpectedly as users try to interact with your page. This often happens when images load without defined dimensions or ads insert dynamically.
  • Optimization Focus: Size attributes for images/videos, space reservation for dynamic content, font loading optimization.
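
The underlying layout-shift entries can be inspected directly while reproducing the issue. A simplified sketch that sums shifts as they happen (the production metric additionally groups shifts into session windows and reports the largest window):

```ts
// Sketch: accumulating layout-shift entries. Shifts that happen shortly after
// user input are excluded, matching the CLS definition.
// LayoutShift entries are not in the default TypeScript DOM typings, so
// declare the fields used here.
interface LayoutShiftEntry extends PerformanceEntry {
  value: number;
  hadRecentInput: boolean;
}

let clsTotal = 0;
const clsObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries() as LayoutShiftEntry[]) {
    if (!entry.hadRecentInput) {
      clsTotal += entry.value;
      console.log('Layout shift:', entry.value, 'running total:', clsTotal);
    }
  }
});
clsObserver.observe({ type: 'layout-shift', buffered: true });
```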

Interaction to Next Paint (INP)

  • Definition: INP measures interface responsiveness by tracking the time between user interactions (clicks, taps, keystrokes) and the next visual update.
  • Performance Thresholds:
    • 🟢 Good: ≤ 200ms
    • 🟡 Needs Improvement: ≤ 500ms
    • 🔴 Poor: > 500ms
  • What It Means: INP affects how responsive your application feels. Long delays between user actions and visual feedback make interfaces feel sluggish and unresponsive.
  • Optimization Focus: JavaScript optimization, reducing main thread work, efficient event handlers, code splitting.
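
A common way to improve INP is to give the browser a chance to paint feedback before an event handler's heavy work runs. A minimal sketch (renderSpinner, processOrder, and the #checkout element are hypothetical stand-ins for your own code):

```ts
// Hypothetical app functions, stubbed for illustration.
function renderSpinner(): void {
  document.body.classList.add('loading');
}
function processOrder(): void {
  // expensive synchronous work
}

// Yielding after the cheap visual update lets the browser paint before the
// heavy work runs, which is what INP measures.
async function handleCheckoutClick(): Promise<void> {
  renderSpinner();
  await new Promise<void>((resolve) => setTimeout(resolve, 0));
  processOrder();
}

document.getElementById('checkout')?.addEventListener('click', () => {
  void handleCheckoutClick();
});
```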

Best Practices

  • Regular Monitoring: Check your RUM dashboard weekly to catch performance regressions early.
  • Focus on High-Impact Pages: Prioritize optimization efforts on pages with high traffic and poor performance.
  • Monitor All Percentiles: Don’t just look at averages - P99 shows what your worst-performing users experience.
  • Correlate with Deployments: Use version filtering to understand how releases affect performance.
  • Set Performance Budgets: Define acceptable thresholds for each metric and monitor against them; a simple budget check is sketched below.
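
One way to act on a budget is a small check against the P75 values the dashboard reports. A minimal sketch (the thresholds follow the "Good" ratings above; fetchP75Metrics is a hypothetical query against your own data, not a Last9 API):

```ts
// Sketch: comparing measured P75 values against a performance budget, for
// example in CI or a scheduled job.
interface WebVitalsP75 {
  ttfb: number; // ms
  lcp: number;  // ms
  fcp: number;  // ms
  cls: number;  // unitless
  inp: number;  // ms
}

const budget: WebVitalsP75 = { ttfb: 800, lcp: 2500, fcp: 1800, cls: 0.1, inp: 200 };

function checkBudget(measured: WebVitalsP75): string[] {
  return (Object.keys(budget) as (keyof WebVitalsP75)[])
    .filter((metric) => measured[metric] > budget[metric])
    .map((metric) => `${metric.toUpperCase()} over budget: ${measured[metric]} > ${budget[metric]}`);
}

// Example (fetchP75Metrics is hypothetical):
// const violations = checkBudget(await fetchP75Metrics('production'));
```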

Troubleshooting

Please get in touch with us on Discord or email if you have any questions.