OpenAI Codex CLI

Send OpenAI Codex CLI session telemetry — prompts, tool calls, model latency, and turn metrics — from `codex exec` and the interactive `codex` TUI to Last9 via OpenTelemetry.

OpenAI Codex emits structured OpenTelemetry data for every developer session: prompts, tool invocations, model API calls, turn-level latency, and token usage. Routing this data to Last9 lets you analyze AI usage patterns, audit tool decisions, track per-session latency, and alert on error rates — within your existing observability stack.

Codex exports three OpenTelemetry signal types:

  • Logs — structured events for prompts, websocket activity, model SSE events, and errors
  • Metrics — counters and histograms for sessions, turns, tool calls, MCP latency, and token usage
  • Traces — spans covering session lifecycle, model calls, tool dispatch, and MCP requests

What gets exported

Logs (events)

Each Codex session emits structured events under service.name = codex_exec (or codex_tui for the interactive TUI). Events share a conversation.id so you can reconstruct a single session end-to-end.

Event | Emitted when | Key attributes
codex.user_prompt | User submits a prompt | prompt.length, conversation.id
codex.websocket_request | Codex opens a model request | model, endpoint, duration_ms
codex.websocket_event | Streaming server event arrives | event.kind, success, duration_ms
codex.sse_event | Model SSE chunk received | input_token_count, output_token_count, cached_token_count, reasoning_token_count
codex.tool_result | Tool invocation completes | tool.name, success, duration_ms

Metrics

Metric | Unit | Key attributes
codex.thread.started | count | originator, model
codex.conversation.turn.count | count | model, slug
codex.turn.e2e_duration_ms | histogram (ms) | model, success
codex.turn.ttft.duration_ms | histogram (ms) | model (time-to-first-token)
codex.turn.ttfm.duration_ms | histogram (ms) | model (time-to-first-message)
codex.turn.token_usage | histogram | type (input/output/cached/reasoning/tool)
codex.turn.tool.call | histogram | tool.name
codex.turn.network_proxy | count | mode
codex.tool.call | count | tool.name
codex.tool.call.duration_ms | histogram (ms) | tool.name
codex.tool.unified_exec | count | command.kind
codex.websocket.request | count | endpoint
codex.websocket.request.duration_ms | histogram (ms) | endpoint
codex.websocket.event | count | event.kind
codex.mcp.tools.list.duration_ms | histogram (ms) | server.name
codex.mcp.tools.cache_write.duration_ms | histogram (ms) | server.name
codex.mcp.tools.fetch_uncached.duration_ms | histogram (ms) | server.name
codex.startup_prewarm.duration_ms | histogram (ms) | kind
codex.remote_models.load_cache.duration_ms | histogram (ms) | standard attributes
codex.plugins.startup_sync | count | status, transport
codex.shell_snapshot.duration_ms | histogram (ms) | standard attributes

Prerequisites

  1. Last9 account — Sign up at app.last9.io
  2. Codex CLI — Install via npm install -g @openai/codex or brew install codex
  3. OTLP credentials — Get your endpoint and auth header from Integrations → OpenTelemetry

Setup

Codex configures OpenTelemetry through a TOML file at ~/.codex/config.toml. Each signal gets its own exporter block — one for logs (exporter), one for traces (trace_exporter), and one for metrics (metrics_exporter).

  1. Get your Last9 OTLP credentials

    Navigate to Integrations → OpenTelemetry in your Last9 dashboard. Copy:

    • OTLP Endpoint (e.g., https://otlp.last9.io or a regional variant like https://otlp-aps1.last9.io:443)
    • Authorization header (e.g., Basic <base64-token>)
  2. Add the OTel block to ~/.codex/config.toml

    Codex’s HTTP exporter requires signal-specific endpoint paths (/v1/logs, /v1/traces, /v1/metrics). Append the following to your existing ~/.codex/config.toml:

    # Required at the top level for metrics to flow.
    analytics_enabled = true

    [otel]
    environment = "dev"
    log_user_prompt = false

    [otel.exporter.otlp-http]
    endpoint = "https://<your-last9-otlp-endpoint>/v1/logs"
    protocol = "binary"
    headers = { Authorization = "Basic <your-last9-auth-token>" }

    [otel.trace_exporter.otlp-http]
    endpoint = "https://<your-last9-otlp-endpoint>/v1/traces"
    protocol = "binary"
    headers = { Authorization = "Basic <your-last9-auth-token>" }

    [otel.metrics_exporter.otlp-http]
    endpoint = "https://<your-last9-otlp-endpoint>/v1/metrics"
    protocol = "binary"
    headers = { Authorization = "Basic <your-last9-auth-token>" }

    [otel.span_attributes]
    "team" = "<your-team>"
  3. Start a Codex session

    codex "summarize what this repo does"

    Or run non-interactively:

    codex exec "explain this file"
    • Logs and traces flush within a few seconds of each event
    • Metrics flush every 60 seconds by default and on shutdown
  4. Verify data is arriving

    • Traces — open Traces in Last9, filter by service.name = codex_exec (or codex_tui for interactive sessions)
    • Metrics — open Metrics, search for codex_turn_token_usage_sum or codex_tool_call_total
    • Logs — open Logs, filter by service.name = codex_exec

Configuration reference

Top-level

Key | Default | Description
analytics_enabled | false | Must be true for the metrics exporter to run

[otel]

Key | Default | Description
environment | dev | Environment tag (dev, staging, prod, etc.)
log_user_prompt | false | If true, includes the full prompt text in logs
span_attributes | {} | Map of resource attributes added to every span
tracestate | {} | Member fields upserted into W3C tracestate

Exporter blocks

Three exporter blocks share the same shape: [otel.exporter] for logs, [otel.trace_exporter] for traces, [otel.metrics_exporter] for metrics.

Each can be one of:

  • OTLP HTTP: [otel.<exporter>.otlp-http] with endpoint, protocol (binary or json), and headers
  • OTLP gRPC: [otel.<exporter>.otlp-grpc] with endpoint and headers
  • None: <exporter> = "none" to disable
  • Statsig: <exporter> = "statsig" (Codex's internal default for metrics)
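For instance, switching only metrics to gRPC while keeping the HTTP blocks for logs and traces might look like this (endpoint and token are placeholders; confirm in the Last9 integration page whether your endpoint accepts gRPC, and on which port):

```toml
# Sketch: gRPC metrics exporter; logs and traces keep their otlp-http blocks.
[otel.metrics_exporter.otlp-grpc]
endpoint = "https://<your-last9-otlp-endpoint>:443"
headers = { Authorization = "Basic <your-last9-auth-token>" }
```

Unlike the HTTP exporter, the gRPC variant takes no signal-specific path and no protocol key.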

What you can do in Last9

Turn-level latency tracking (metrics)

codex.turn.ttft.duration_ms and codex.turn.ttfm.duration_ms capture time-to-first-token and time-to-first-message from the model. Plot p95/p99 over time to detect model degradation:

histogram_quantile(0.95,
  sum by (le, model) (rate(codex_turn_ttft_duration_ms_milliseconds_bucket[5m]))
)

Token efficiency (metrics)

codex.turn.token_usage is a histogram broken down by type (input, output, cached, reasoning, tool). Compare cached versus input tokens to measure prompt cache efficiency.
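The cached-versus-input comparison can be sketched in PromQL, assuming the histogram sum lands as codex_turn_token_usage_sum (the name the verification step above searches for) with the type attribute flattened to a type label:

```promql
# Share of prompt tokens served from cache over the last 15 minutes.
sum(rate(codex_turn_token_usage_sum{type="cached"}[15m]))
/
sum(rate(codex_turn_token_usage_sum{type=~"cached|input"}[15m]))
```

A ratio near 1 means most of the prompt context is being served from the cache rather than re-sent as fresh input tokens.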

Tool call latency (metrics)

codex.tool.call.duration_ms segmented by tool.name shows which tools dominate session time. Useful for spotting slow MCP servers or shell-heavy sessions.
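A hedged sketch of that breakdown, assuming the histogram exports as codex_tool_call_duration_ms_milliseconds_bucket (mirroring the ttft query above) and tool.name flattens to tool_name:

```promql
# p95 tool-call latency per tool, 5-minute window.
histogram_quantile(0.95,
  sum by (le, tool_name) (rate(codex_tool_call_duration_ms_milliseconds_bucket[5m]))
)
```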

MCP server health (metrics)

codex.mcp.tools.list.duration_ms, codex.mcp.tools.cache_write.duration_ms, and codex.mcp.tools.fetch_uncached.duration_ms reveal which MCP servers are slow or thrashing the tool cache.
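For example, a p95 over tools/list latency per MCP server surfaces the slowest servers first (metric and label names assume the same Prometheus flattening as the queries above):

```promql
# Slowest MCP servers by p95 tools/list latency.
topk(5, histogram_quantile(0.95,
  sum by (le, server_name) (rate(codex_mcp_tools_list_duration_ms_milliseconds_bucket[5m]))
))
```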

Session replay via conversation.id (logs + traces)

Every log and span carries a conversation.id. Filter by it to reconstruct the full session sequence:

user_prompt → websocket_request → websocket_event → sse_event → tool_result → ...

Error rate monitoring (logs + alerts)

codex.websocket_event events with success = false flag failed model calls. Create a Last9 alert on the rate of failures to catch upstream model degradation early.
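Alongside log-based alerts, the codex.websocket.event counter gives a cheap metrics-side signal. A sketch, assuming it exports as codex_websocket_event_total with event.kind flattened to event_kind:

```promql
# Websocket event throughput by kind; alert when error-like kinds trend upward.
sum by (event_kind) (rate(codex_websocket_event_total[5m]))
```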

Team-level tagging

Tag sessions by team or project via [otel.span_attributes]:

[otel.span_attributes]
"team" = "platform"
"project" = "infra-agent"

All spans from that session carry the labels, enabling per-team breakdowns in Last9.


Troubleshooting

  • No data in Last9

    • Confirm analytics_enabled = true is at the top level of config.toml, not nested under [otel]
    • Verify each exporter endpoint includes the signal-specific path (/v1/logs, /v1/traces, /v1/metrics) — Codex does not append it
    • Check that the auth header value starts with Basic and has no extra quotes
  • Traces and logs flow but metrics are missing

    • The default metrics_exporter is statsig. Set [otel.metrics_exporter.otlp-http] (or otlp-grpc) explicitly
    • Confirm analytics_enabled = true — without it, metrics_exporter is forcibly disabled
    • Metrics flush every 60 seconds; wait at least 90 seconds before checking
    • Metric names in Last9 use underscores: codex.tool.call becomes codex_tool_call_total
  • Service name appears as codex_exec or codex_tui instead of codex

    • This is by design — Codex sets service.name from the running CLI binary (codex_exec for codex exec, codex_tui for interactive codex). Use a regex filter (service.name =~ "codex_.*") in Last9 to cover both.
  • Startup warnings about invalid otel.span_attributes or otel.tracestate

    • Codex logs these at startup and ignores invalid entries. Fix the offending keys or values to silence the warnings
  • 401 / authentication errors

    • Verify the header format: Authorization = "Basic <token>" (no Bearer prefix, no trailing whitespace)
    • Regenerate the token from Integrations → OpenTelemetry if it has expired

Get in touch with us on Discord or email if you have any questions.