Query Builder
An advanced query builder mode for parsing, filtering, transforming, and aggregating logs without writing LogQL
Introduction
The Logs Explorer Query Builder provides a visual interface for constructing log analysis queries through a series of stages or operation blocks. Each stage represents a specific data manipulation function.
Core Stages
Filter
The FILTER stage filters data based on conditions built from the following operators:
Supported Operators:
- =: Exact match
- !=: Not equal to
- contains: String contains substring
- not contains: String does not contain substring
- matches: Matches a pattern/regular expression
Parameters:
- Field name: The field to apply the filter on
- Operator: One of the supported operators
- Value: The comparison value or pattern
A single field and operator can have multiple values. These values are “ORed”. To get the equivalent of AND, use multiple filter stages.
Examples:
resource_source = "kubernetes/infosec/trufflehog"
repository != "https://github.com/brightcove/trufflehog.git"
message contains "error"
user_agent not contains "bot"
email matches ".*@company\.com"
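As noted above, multiple values on a single field and operator are ORed; to AND two conditions, stack separate FILTER stages. For example, reusing the fields from the examples above, the following two stages together match logs that contain "error" and do not come from bots:
message contains "error"
user_agent not contains "bot"
Each stage further narrows the result set, so chained stages behave like a logical AND.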
Parse
The PARSE stage supports multiple parsing methods:
JSON
Parameters:
- from: Source field to parse. Default is the log body
- fields: Optional comma-separated list of fields to extract
Example:
json from body fields DetectorName as credential_type

REGEXP
Parameters:
- from: Source field to apply the regex on. Default is the log body
- pattern: Regular expression pattern with one or more capture groups
- into: Target field names, one for each capture group in the pattern
Examples:
regexp from finding ((?i)client[_-]?id[=\s\'\"]+([\'\"]*(\w|<|>|;|${})+)) into client_id
regexp from finding ((?i)[^A-Za-z0-9]([A-Za-z0-9]{40}?)[^A-Za-z0-9]) into liveapikey
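A PARSE stage typically feeds later stages. A sketch, assuming the log body is TruffleHog-style JSON containing the DetectorName and Raw fields used in the examples on this page:
json from body fields DetectorName as credential_type, Raw as finding
regexp from finding ((?i)client[_-]?id[=\s\'\"]+([\'\"]*(\w|<|>|;|${})+)) into client_id
The JSON stage extracts and renames two fields, and the regexp stage then captures a client ID out of the extracted finding field.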
Transform
The TRANSFORM stage provides several data transformation methods:
splitByString
Parameters:
- from: Source field
- on: Delimiter character
- select part: Part number to select
- as: Output field name
Example:
splitByString from link on # select part 1 as real_link

replaceRegex
Parameters:
- from: Source field
- pattern: Regex pattern to match
- with: Replacement string
- as: Output field name
Example:
replaceRegex from user_email /[<>]/, with - as user_email

concat
Parameters:
- from: First field to concatenate
- Additional fields/literals to concatenate
- as: Output field name
Example:
concat real_link, '#', line_number as link

if
Parameters:
- Condition: isEqual, !isEqual, isEmpty, !isEmpty
- then: Value if condition is true
- else: Value if condition is false
- as: Output field name
Example:
if !isEmpty client_id then client_id else finding as finding
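The TRANSFORM examples above are written to chain. A sketch, assuming link, line_number, and client_id were produced by earlier PARSE stages:
splitByString from link on # select part 1 as real_link
concat real_link, '#', line_number as link
if !isEmpty client_id then client_id else finding as finding
The first stage strips any existing fragment from link, the second appends the line number as a new fragment, and the third falls back to the raw finding when no client_id was captured.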
Aggregate
The AGGREGATE stage supports various statistical computations with optional grouping:
Zero or One Argument Functions:
- count
  - Usage without an argument: count as total_count
  - Usage with an argument: count field_name as field_count

Single Argument Functions:
- sum: Calculate the sum of values
- min: Find the minimum value
- max: Find the maximum value
- avg: Calculate the average
- median: Find the median value
- stddev: Calculate the sample standard deviation
- stddev_pop: Calculate the population standard deviation
- variance: Calculate the sample variance
- variance_pop: Calculate the population variance
Usage syntax: function_name field as result_name

Two Argument Functions:
- quantile: Calculate an approximate quantile. The quantile value is between 0 and 1.
- quantile_exact: Calculate an exact quantile. The quantile value is between 0 and 1.
Usage: quantile(0.99, bytes_transferred as bytes_p99)

Additional Features:
- groupby: Group results by specified fields. Similar to a SQL GROUP BY
- Example: groupby user_agent as user_agent, service as service

Note: Each aggregation stage limits the available fields to those defined in the as clause of the previous stage.
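Putting these together, a sketch of a single AGGREGATE stage, assuming one stage can combine several functions with one groupby (as the groupby example above suggests):
count as total_count
quantile(0.99, bytes_transferred as bytes_p99)
groupby service as service
This yields one row per service with the event count and the approximate 99th-percentile bytes transferred; per the note above, a subsequent stage would only see total_count, bytes_p99, and service.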
Field Handling
Field Aliasing
Use the as keyword to create aliases for fields:
SourceName as event_id
Raw as finding
Nested Fields
Access nested data using dot notation:
SourceMetadata.Data.Github.email
SourceMetadata.Data.Github.repository
SourceMetadata.Data.Github.file
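Dot notation combines with aliasing; for example, to extract a nested field under a shorter name (author_email is just an illustrative output name):
SourceMetadata.Data.Github.email as author_email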
Query Construction Best Practices
Operation Order
- Start with FILTER operations to reduce data volume
- Follow with PARSE operations to extract the fields you need
- Apply TRANSFORM operations to shape the extracted data
- Apply AGGREGATE operations last for final summarization
- Applying FILTER again after TRANSFORM and AGGREGATE is also a common pattern (see the sketch after this list)
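A sketch of a pipeline that follows this order, composed from the stage examples used throughout this page (stage names are shown for orientation, and exactly how count and groupby combine inside one AGGREGATE block follows the Aggregate section above):
FILTER: resource_source = "kubernetes/infosec/trufflehog"
PARSE: json from body fields DetectorName as credential_type, Raw as finding
PARSE: regexp from finding ((?i)client[_-]?id[=\s\'\"]+([\'\"]*(\w|<|>|;|${})+)) into client_id
TRANSFORM: if !isEmpty client_id then client_id else finding as finding
AGGREGATE: count as total_count groupby credential_type as credential_type
The filter cuts volume first, the parse stages extract fields, the transform normalizes the finding, and the aggregate counts findings per credential type.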
Field Naming
- Use descriptive names for output fields
- Maintain consistent naming conventions
- Avoid special characters in field names

Error Prevention
- Use !isEmpty checks before transformations
- Validate regex patterns before using them (a tool such as https://regex101.com/ helps)
- Check that a field exists before accessing nested data
Performance Optimization
- Keep time ranges reasonable
- Use = on Service for much faster query execution
- Filter early in the query chain
- Prefer querying pre-extracted attributes over running operations on Body. Use the control plane to extract fields from the body during ingestion.
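For instance, combining an exact-match Service filter with an early content filter (the service name here is hypothetical):
Service = "payments-api"
message contains "error"
The equality match on Service narrows the data before the more expensive substring search runs.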
Troubleshooting
Please get in touch with us on Discord or by email if you have any questions.