
Query Builder

Advanced logs query builder mode to parse, filter, transform, and aggregate logs without writing LogQL

Introduction

The Logs Explorer Query Builder provides a visual interface for constructing log analysis queries through a series of stages or operation blocks. Each stage represents a specific data manipulation function.

[Screenshot: Last9 Logs Query Builder mode in action]

Core Stages

Filter

The FILTER stage narrows results by applying conditions to log fields.

Supported Operators:

  • =: Exact match
  • !=: Not equal to
  • contains: String contains substring
  • not contains: String does not contain substring
  • matches: Matches a pattern/regular expression

Parameters:

  • Field name: The field to apply the filter on
  • Operator: One of the supported operators
  • Value: The comparison value or pattern

A single field and operator can take multiple values, which are combined with OR. To combine conditions with AND, use multiple FILTER stages.
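For instance, with illustrative field and value names, adding a second value to one stage ORs it in, while stacking stages ANDs the conditions:

Stage 1: service = "checkout" (adding "payments" as a second value ORs it with "checkout")
Stage 2: message contains "error" (a separate stage, ANDed with Stage 1)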

Examples:

resource_source = "kubernetes/infosec/trufflehog"
repository != "https://github.com/brightcove/trufflehog.git"
message contains "error"
user_agent not contains "bot"
email matches ".*@company\.com"

Parse

The PARSE stage supports multiple parsing methods:

  1. JSON

    • Parameters:

      • from: Source field to parse. Default is the log body
      • fields: Optional comma-separated list of fields to extract
    • Example: json from body fields DetectorName as credential_type

  2. REGEXP

    • Parameters:

      • from: Source field to apply regex on. Default is the log body
      • pattern: Regular expression pattern with one or more capture groups
      • into: Target field names, one for each capture group in the regex pattern
    • Examples:

      regexp from finding ((?i)client[_-]?id[=\s\'\"]+([\'\"]*(\w|<|>|;|${})+)) into client_id
      regexp from finding ((?i)[^A-Za-z0-9]([A-Za-z0-9]{40}?)[^A-Za-z0-9]) into liveapikey
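As a simpler illustration, a pattern with a single capture group extracts into a single target field (the field and pattern below are hypothetical):

      regexp from message status=(\d{3}) into status_code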

Transform

The TRANSFORM stage provides several data transformation methods:

  1. splitByString

    • Parameters:

      • from: Source field
      • on: Delimiter character
      • select part: Part number to select
      • as: Output field name
    • Example:

      splitByString from link on # select part 1 as real_link
  2. replaceRegex

    • Parameters:

      • from: Source field
      • pattern: Regex pattern to match
      • with: Replacement string
      • as: Output field name
    • Example:

      replaceRegex from user_email /[<>]/ with - as user_email
  3. concat

    • Parameters:

      • from: First field to concatenate
      • Additional fields/literals to concatenate
      • as: Output field name
    • Example:

      concat real_link, '#', line_number as link
  4. if

    • Parameters:

      • Condition: isEqual, !isEqual, isEmpty, !isEmpty
      • then: Value if condition is true
      • else: Value if condition is false
      • as: Output field name
    • Example:

      if !isEmpty client_id then client_id else finding as finding
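These transform methods compose across stages. For instance, pairing a split with an emptiness guard (field names reused from the examples above; safe_link is a hypothetical output name):

  splitByString from link on # select part 1 as real_link
  if !isEmpty real_link then real_link else link as safe_link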

Aggregate

The AGGREGATE stage supports various statistical computations with optional grouping:

  1. Zero or One Argument Functions:

    • count
      • Usage without argument: count as total_count
      • Usage with argument: count field_name as field_count
  2. Single Argument Functions:

    • sum: Calculate sum of values

    • min: Find minimum value

    • max: Find maximum value

    • avg: Calculate average

    • median: Find median value

    • stddev: Calculate sample standard deviation

    • stddev_pop: Calculate population standard deviation

    • variance: Calculate sample variance

    • variance_pop: Calculate population variance

    • Usage Syntax: function_name field as result_name (see the sketch following this list)

  3. Two Argument Functions:

    • quantile: Calculate an approximate quantile. The quantile value is between 0 and 1.

    • quantile_exact: Calculate an exact quantile. The quantile value is between 0 and 1.

    • Usage: quantile(0.99, bytes_transferred as bytes_p99)
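As a concrete sketch of the usage syntaxes above (response_time_ms is a hypothetical numeric field):

  avg response_time_ms as avg_latency
  quantile(0.99, response_time_ms as latency_p99)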

Additional Features:

  • groupby: Group results by specified fields, similar to SQL's GROUP BY
    • Example: groupby user_agent as user_agent, service as service

Note:

When aggregation stages are chained, each stage can reference only the fields defined in the as clauses of the stage before it.
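For example, a first stage that produces request_count and service can be followed by a second stage that references only those fields (names hypothetical):

  Stage 1: count as request_count
           groupby service as service
  Stage 2: max request_count as peak_requests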

Field Handling

Field Aliasing

Use the as keyword to create aliases for fields:

SourceName as event_id
Raw as finding

Nested Fields

Access nested data using dot notation:

SourceMetadata.Data.Github.email
SourceMetadata.Data.Github.repository
SourceMetadata.Data.Github.file

Query Construction Best Practices

  1. Operation Order

    • Start with FILTER operations to reduce data volume
    • Follow with PARSE operations to extract needed fields
    • Apply TRANSFORM operations next to shape the extracted data
    • Finish with AGGREGATE operations to summarize the results
    • Applying FILTER again after TRANSFORM or AGGREGATE is also a common pattern (see the worked sketch after this list)
  2. Field Naming

    • Use descriptive names for output fields
    • Maintain consistent naming conventions
    • Avoid special characters in field names
  3. Error Prevention

    • Use !isEmpty checks before transformations
    • Validate regex patterns before using them (for example, with https://regex101.com/)
    • Check field existence before accessing nested data
  4. Performance Optimization

    • Keep time ranges reasonable
    • Use exact-match (=) filters on Service for much faster query execution
    • Filter early in the query chain
    • Prefer querying pre-extracted attributes over operations on Body; use the control plane to extract fields from the body during ingestion
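As referenced under Operation Order, a typical pipeline filters first, then parses, transforms, and aggregates. A sketch in the spirit of this guide's examples (field and value names hypothetical):

  FILTER     service = "checkout"
  FILTER     message contains "error"
  PARSE      json from body fields DetectorName as credential_type
  TRANSFORM  if !isEmpty credential_type then credential_type else message as credential_type
  AGGREGATE  count as error_count
             groupby credential_type as credential_type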

Troubleshooting

Please get in touch with us on Discord or via email if you have any questions.