It’s 2 AM. An incident’s in progress. Error rates are climbing.
You jump into the logs, filter by service, adjust the time window… and now you need a LogQL query.
You write one. It errors out.
You fix the syntax, try again, only to realize you need a different filter or a new aggregation. Back to rewriting.
By the time you’ve got the query right, you’ve already lost 10–15 minutes. The system is still broken, and you still don’t know why.
This is the reality of logs during an incident. Even if you’re fluent in LogQL, it slows you down. If you're not, you're stuck copy-pasting from docs or old dashboards, hoping something works.
Pre-written queries work when you’re automating known issues with the Query API: you define the pattern once and reuse it. But most problems aren’t that clean. During an incident, you’re testing hypotheses. Switching fields. Rethinking filters. You need to move fast, without rewriting queries from scratch every time.
That’s what the Query Builder solves.
It’s a visual tool for log exploration. No LogQL. No syntax.
Why Visual Query Building Matters
Once you’ve ruled out the obvious alerts, the real investigation begins. You’re looking for patterns, correlations, edge cases, things dashboards won’t show.
Maybe you start with a filter on status code. Then group by client ID to isolate noisy tenants. A few outliers stand out. You tweak the time window, compare regions, and try a different field. It’s not always linear. You follow clues, discard dead ends, and try something else.
This back-and-forth is where traditional query languages slow you down. Every iteration requires new syntax. Every wrong guess breaks your momentum.
With visual queries, you stay in flow. Each step (filter, group, transform) feels like part of the investigation, not a detour into documentation. You can test ideas quickly, adjust context instantly, and move from “what’s happening” to “why” without rewriting from scratch.
Example: Auth Failure Debug with Query Builder
You receive an alert from the Query API: a spike in error logs from the `auth` service over the last 15 minutes. It's not just noise; the volume is well above the normal threshold. You need to figure out where the failures are coming from and what’s causing them.
Traditional LogQL approach:

```
{service="auth"} |= "error" | json | __error__ = "" | user_id, error_code, client_id
```

You hit an issue: `__error__` isn't a recognized field in this dataset. You’re forced to pause and dig through LogQL documentation to troubleshoot field extraction. Meanwhile, you still have no clarity on the nature of the errors.
Using Query Builder instead:
- Step 1: Apply filters
  - Set `service = "auth"`
  - Add `level = "error"`
  - → Immediately returns 1,200 log entries for the past hour, far above typical volume.
- Step 2: Parse JSON log body
  - Select `body` as the source for parsing
  - Extract fields: `user_id`, `error_code`, `client_id`
  - → Parsed values are displayed per log line. No trial-and-error needed.
- Step 3: Add a derived field
  - Create a virtual column using conditional logic: `if user_id == null then "anonymous" else "authenticated"`
  - Label the output as `user_type`
  - → You now see that ~80% of these error entries are associated with `user_type = anonymous`.
- Step 4: Aggregate by key fields
  - Group logs by `error_code` and `user_type`
  - Count the number of entries per group
  - → Most errors fall under `invalid_token`, specifically tied to `anonymous` users on `client_id = mobile-app-v2`.
Outcome:
You’ve isolated the source of the spike to a single client sending invalid tokens on unauthenticated requests. No syntax debugging. No field mismatch errors. Total time: under a minute.
3 Workflows Where Visual Queries Save Time
1. Debug Broken Queries Without a Full Deploy
Your automated workflows rely on saved queries, but one day, a query that worked fine starts returning empty results or throwing errors. Maybe a field name changed. Maybe the log format shifted. Or maybe the volume grew too large and broke your assumptions.
With the Query Builder, you don’t need to edit JSON by hand or redeploy just to test fixes.
- Rebuild the query visually using the same logic.
- Run each stage independently: filters, parsing, transforms.
- Spot exactly where the data stops making sense.
- Adjust interactively, test immediately.
- Export the working query back into automation.
You cut debugging time from hours to minutes, and avoid chasing bugs in code with no feedback loop.
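For the last step, exporting the fix back into automation, the flow might look something like this. The service and field names are made up for illustration; the endpoint and stage format mirror the Query API example later in this post.

```python
import requests

# Illustrative sketch: stages rebuilt and validated in the Query Builder,
# pushed back into automation via the Query API. Field names are made up.
fixed_stages = [
    {"type": "filter", "field": "service", "value": "checkout"},
    {"type": "parse", "method": "json", "fields": ["order_id", "status"]},
    {"type": "aggregate", "function": "count", "groupby": ["status"]},
]

response = requests.post(
    "https://api.last9.io/v1/query",
    headers={"Authorization": f"Bearer {token}"},  # token: your Last9 API token
    json={"stages": fixed_stages, "time_range": "1h"},
)
response.raise_for_status()  # fail fast if the rebuilt query is still broken
```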
2. Make Logs Usable Outside of Engineering
Logs are a goldmine for everyone. Product wants insight into user journeys. Support needs to trace failed sessions. Security is looking at login behavior.
The problem: they’re not fluent in LogQL and don’t want to debug YAML.
The Query Builder changes that.
Say someone wants to track mobile user behavior:
- Filter → `user_agent` contains `"mobile"`
- Parse → Pull out `action`, `user_id`, `session_id`
- Transform → Tag actions as `"purchase"` or `"browse"`
- Aggregate → Count by action type and hour
Each step is visible and editable. The structure is easy to understand. No need to ping engineering every time they want to tweak the view.
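Exported for reuse, those four steps might translate into stages roughly like this. This is a sketch only: keys such as `operator`, `mapping`, and `interval` are assumptions, not confirmed Query API schema.

```python
# Rough sketch of the mobile-behavior view as query stages, modeled on the
# stage format shown later in this post. "operator", "mapping", and "interval"
# are illustrative assumptions.
mobile_behavior_stages = [
    {"type": "filter", "field": "user_agent", "operator": "contains", "value": "mobile"},
    {"type": "parse", "method": "json", "fields": ["action", "user_id", "session_id"]},
    {"type": "transform", "method": "map", "field": "action",
     "mapping": {"checkout": "purchase", "view": "browse"}, "as": "action_type"},
    {"type": "aggregate", "function": "count", "groupby": ["action_type"], "interval": "1h"},
]
```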
3. Clean Up Messy Logs for Exploration
Production logs are rarely clean. You’ll hit malformed JSON, missing fields, and inconsistent schemas, especially across service versions.
Rather than patching these in code or throwing them out, use the Query Builder to build a cleanup pipeline:
- Filter → Grab logs from the `payment` service over the past hour
- Parse → JSON-parse the body with error handling
- Transform → Fall back to `tracking_id` if `transaction_id` is missing
- Filter → Drop rows with `amount <= 0` or invalid `status`
- Aggregate → Sum `amount` by `status` and time window
You see how each stage changes the data in real time. Which logs get dropped? Where do fields go missing? What patterns surface? It’s an easy way to isolate schema issues before they silently skew dashboards or alerts.
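The same cleanup rules are easy to sanity-check in plain code before (or after) building them visually. A minimal sketch, assuming the input is JSON log bodies from the payment service; the set of valid statuses is an assumption, so adjust it to your own values.

```python
import json
from collections import defaultdict

# Local sketch of the cleanup pipeline above. VALID_STATUSES is an assumption;
# replace it with the statuses your payment service actually emits.
VALID_STATUSES = {"success", "failed", "pending"}

def clean_and_sum(raw_lines):
    totals = defaultdict(float)
    for line in raw_lines:
        try:
            entry = json.loads(line)                                   # Parse with error handling
        except json.JSONDecodeError:
            continue
        txn = entry.get("transaction_id") or entry.get("tracking_id")  # Transform: fallback id
        if txn is None or entry.get("amount", 0) <= 0:                 # Filter: drop invalid rows
            continue
        if entry.get("status") not in VALID_STATUSES:
            continue
        totals[entry["status"]] += entry["amount"]                     # Aggregate: sum by status
    return dict(totals)
```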
Built for Speed and Scale in Production
Visual tools often fall short when logs pile up. They’re either too slow to use during incidents or too abstract to trust when something breaks. The Last9 Query Builder avoids those traps by focusing on speed, control, and clear feedback.
Early-stage filtering
Every query starts by reducing the noise. Filters are pushed down as early as possible in the execution plan, so you’re not wasting resources on data you’re going to discard anyway. Only the fields you reference are loaded, keeping memory usage and response sizes under control.
Safety checks before you hit run
If a query looks expensive or malformed, like a regex across unbounded fields or a missing timestamp, it gets flagged before execution. No more accidentally slamming the backend during a quick test.
Live results as you build
Each stage shows you exactly what changed: how many records matched, what fields were parsed, and where the structure breaks. The output updates in real-time without needing a full rerun. You can spot mistakes early and fix them without jumping through hoops.
Defaults that stay out of the way
The builder suggests smart defaults based on your usage patterns:
- Time ranges that balance recency and volume
- Filter hints like `env`, `level`, and `service` that are common across logs
- Validation for common pain points like inefficient regexes
All of this adds up to a visual interface that doesn’t slow you down. It’s responsive when debugging live traffic and reliable when setting up longer-running queries for monitoring or alerting.
Advanced Use Cases for Production Logs
The Query Builder is built to support real-world operational tasks: tracing issues across services, using regex safely, and monitoring log quality across environments.
Trace Issues Across Multiple Services
Incidents rarely stay isolated. A failure in the billing service might start as a timeout in the API gateway. To understand the full picture, you need to trace a request as it moves through each system.
Start by narrowing down each service:
- Query API Gateway logs to extract `request_id`, then tag the source as `gateway`.
- Do the same for the downstream service, say `payment`, and tag it accordingly.
- Once both sets are available, use the shared `request_id` to correlate behavior across systems.
This lets you identify where things start breaking, whether it’s retries piling up downstream or mismatched status codes between services.
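A minimal sketch of that correlation step in Python, assuming each query’s results come back as a list of dicts that include the shared `request_id`:

```python
from collections import defaultdict

# Correlate two sets of query results on a shared request_id.
# Input shape (list of dicts with a "request_id" key) is an assumption.
def correlate_by_request_id(gateway_logs, payment_logs):
    timeline = defaultdict(lambda: {"gateway": [], "payment": []})
    for entry in gateway_logs:
        timeline[entry["request_id"]]["gateway"].append(entry)
    for entry in payment_logs:
        timeline[entry["request_id"]]["payment"].append(entry)
    # Requests the gateway saw that never reached the payment service
    dropped = [rid for rid, hops in timeline.items() if not hops["payment"]]
    return timeline, dropped
```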
Use Regex Safely at Scale
Regex is often necessary to extract fields from loosely structured logs. But in production, poorly scoped patterns can be costly.
The Query Builder provides immediate feedback, so you can validate regex before running it at scale. For example:
- Parse the `user_agent` field using a regex like `Mozilla\/.*`.
- Create a derived field: if it matches, label as `human`; if not, tag as `bot`.
You can test the pattern, see how it performs, and inspect matched results before applying it broadly.
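It’s also cheap to sanity-check the pattern locally first. A tiny sketch with made-up sample strings; swap in a handful of real `user_agent` values from your logs:

```python
import re

# Quick local check of the classification pattern before running it at scale.
# The sample user agents are invented for illustration.
pattern = re.compile(r"Mozilla\/.*")

samples = [
    "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)",
    "curl/8.4.0",
    "python-requests/2.31.0",
]

for ua in samples:
    label = "human" if pattern.match(ua) else "bot"
    print(f"{label:5s}  {ua}")
```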
Monitor Log Quality in Real Time
Log quality isn’t just about formatting; it affects how easily you can troubleshoot later. Missing fields, inconsistent schemas, or parse errors can create blind spots.
Use the Query Builder to check log completeness:
- Filter logs by service or environment.
- Parse JSON fields and flag entries where expected fields like `user_id`, `status`, or `timestamp` are missing.
- Group by field or service to identify patterns, e.g., one team pushing logs without timestamps.
This helps catch instrumentation gaps before they turn into support tickets or broken dashboards.
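The same check is easy to mirror in code when you want it outside the UI. A rough sketch, assuming log entries arrive as JSON strings; `REQUIRED_FIELDS` is a placeholder for your own conventions.

```python
import json

# Rough completeness check. REQUIRED_FIELDS is an assumption; set it to the
# fields your team expects in every log entry.
REQUIRED_FIELDS = {"user_id", "status", "timestamp"}

def missing_fields(raw_line: str) -> set:
    try:
        entry = json.loads(raw_line)
    except json.JSONDecodeError:
        return set(REQUIRED_FIELDS)  # unparseable lines are missing everything
    return REQUIRED_FIELDS - set(entry)
```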
Lock the Pattern into a Query
You’ve found the issue: unauthenticated requests from `mobile-app-v2` triggering `invalid_token` errors.
Now that you’ve explored and validated the pattern visually, you can turn it into a reusable query using the Last9 Query API.
Here’s how that logic translates:
```python
query_stages = [
    {"type": "filter", "field": "service", "value": "auth"},
    {"type": "filter", "field": "level", "value": "error"},
    {"type": "parse", "method": "json", "fields": ["user_id", "error_code", "client_id"]},
    {"type": "transform", "method": "if", "condition": "isEmpty user_id",
     "then": "anonymous", "else": "authenticated", "as": "user_type"},
    {"type": "aggregate", "function": "count", "groupby": ["error_code", "user_type", "client_id"]}
]
```
To run it, just call the API:
```python
import requests

response = requests.post(
    "https://api.last9.io/v1/query",
    headers={"Authorization": f"Bearer {token}"},  # token: your Last9 API token
    json={"stages": query_stages, "time_range": "1h"}
)
```
Each stage reflects a step you took in the UI:
- Filter → Scoped down to `auth` service logs with `error` level
- Parse → Pulled out `user_id`, `error_code`, `client_id` from the log body
- Transform → Created a new `user_type` field based on whether `user_id` is set
- Aggregate → Counted occurrences by `error_code`, `user_type`, and `client_id`
What started as a manual investigation now runs as a structured query, ready for dashboards, scheduled reports, or alerting. The Query Builder gave you a repeatable, production-ready pattern you can lock in and reuse, with no extra effort.
Fits Where You Already Work
The Query Builder integrates cleanly with how engineers already debug, collaborate, and deploy.
If you're managing an incident, updating internal docs, or shipping structured logging changes, it’s designed to slot into existing workflows.
Share Query Context in Slack
During incidents, speed and shared context matter. Instead of describing log filters or copy-pasting LogQL, drop a direct link to a saved Query Builder pattern:
Payment errors spiking. Check this: [Query Builder URL]
The link opens a pre-configured query with filters, parsers, and transforms already applied. Anyone in the channel can modify it, rerun it with different time ranges, or isolate edge cases. This reduces friction during live debugging and shortens response times.
Replace Static Runbooks with Interactive Queries
Teams often document example queries in Confluence or markdown, but static LogQL snippets age quickly, and screenshots are rarely useful.
Instead, use links to live Query Builder templates. These:
- Reflect the current version of the query, even after changes
- Return actual log data, not static output
- Allow engineers to change parameters and rerun on demand
This approach improves internal documentation without creating extra maintenance overhead.
Validate Logging Conventions in CI/CD
You can export Query Builder patterns to JSON and run them through the Query API in your CI/CD pipeline. This enables structured validation during deployment:
- Check that logs contain required fields and tags
- Detect regressions in log structure or parsing
- Block deployments if log format changes would break downstream tools or alerts
This helps enforce consistency across teams and services without manual review.
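A sketch of what that CI step might look like, assuming the pattern was exported to a JSON file and the token comes from the environment. The file name, the `results` key in the response, and the zero-violation rule are illustrative assumptions.

```python
import json
import os
import sys

import requests

# Illustrative CI check: run an exported Query Builder pattern through the
# Query API and fail the build if it reports violations. "log_contract.json"
# and the "results" key are assumptions, not documented behavior.
with open("log_contract.json") as f:
    stages = json.load(f)

response = requests.post(
    "https://api.last9.io/v1/query",
    headers={"Authorization": f"Bearer {os.environ['LAST9_API_TOKEN']}"},
    json={"stages": stages, "time_range": "15m"},
)
response.raise_for_status()

violations = response.json().get("results", [])
if violations:
    print(f"{len(violations)} log entries break the logging contract")
    sys.exit(1)
print("log format check passed")
```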
Query Builder vs Query API
| Use Case | Query Builder | Query API |
|---|---|---|
| Incident investigation | Yes | No |
| Pattern discovery | Yes | No |
| Team collaboration | Yes | Sometimes |
| Automated monitoring | No | Yes |
| Alerting pipelines | Rarely | Yes |
| Prototyping queries | Yes | No |
Use Query Builder for fast exploration, debugging, and collaboration.
Use Query API when you need repeatable checks, integration into pipelines, or automated monitoring workflows.
How to Get Started
If you're already familiar with the Query API, the quickest way to try Query Builder is to rebuild an existing query visually. Here's how to map your existing logic:
- Set the same filters for `service` and time range
- Add a parse step to extract fields (e.g., `error_code`, `user_id`)
- Apply `transform`, `map`, or `group by` as needed
- Run and compare the results side by side
You’ll notice how much faster it is to iterate: no need to re-run the full query just to adjust a field or time range.
Example: Start with a Minimal Query
A simple query setup looks like this:
- Filter: `service = "api"`
- Parse: `json.error`, `json.user_id`
- Group By: `json.error`
This surfaces recurring issues like invalid user input or noisy error codes, without needing to write LogQL manually.
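For reference, here is the same minimal setup in the stage format used earlier. How nested `json.*` fields are addressed is a guess, so treat this as a sketch.

```python
# The minimal query above in the stage format from the Query API example
# earlier in this post. Handling of nested json.* fields is an assumption.
minimal_stages = [
    {"type": "filter", "field": "service", "value": "api"},
    {"type": "parse", "method": "json", "fields": ["error", "user_id"]},
    {"type": "aggregate", "function": "count", "groupby": ["error"]},
]
```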
Why This Approach Works
Query Builder removes friction from log analysis while keeping full control over the query logic.
- You still define filters, parsing rules, and transformations
- You just do it in a structured interface that validates as you go
- The query output stays consistent with the Query API
You can export the same logic as LogQL or pass it into CI pipelines. What changes is the interface, not the capability.
For debugging production issues or spotting anomalies quickly, Query Builder reduces overhead. You don’t waste time on syntax errors or guessing field names. And when you're ready, the same query runs in automated checks or saved views, no rewrite needed.
It scales with your workflow, whether you're troubleshooting, onboarding a new teammate, or tightening up log hygiene.