# OpenAI Codex CLI
Send OpenAI Codex CLI session telemetry — prompts, tool calls, model latency, and turn metrics — from `codex exec` and the interactive `codex` TUI to Last9 via OpenTelemetry.
OpenAI Codex emits structured OpenTelemetry data for every developer session: prompts, tool invocations, model API calls, turn-level latency, and token usage. Routing this data to Last9 lets you analyze AI usage patterns, audit tool decisions, track per-session latency, and alert on error rates — within your existing observability stack.
Codex exports three OpenTelemetry signal types:
- Logs — structured events for prompts, websocket activity, model SSE events, and errors
- Metrics — counters and histograms for sessions, turns, tool calls, MCP latency, and token usage
- Traces — spans covering session lifecycle, model calls, tool dispatch, and MCP requests
## What gets exported

### Logs (events)

Each Codex session emits structured events under `service.name = codex_exec` (or `codex_tui` for the interactive TUI). Events share a `conversation.id` so you can reconstruct a single session end-to-end.
| Event | Emitted when | Key attributes |
|---|---|---|
| `codex.user_prompt` | User submits a prompt | `prompt.length`, `conversation.id` |
| `codex.websocket_request` | Codex opens a model request | `model`, `endpoint`, `duration_ms` |
| `codex.websocket_event` | Streaming server event arrives | `event.kind`, `success`, `duration_ms` |
| `codex.sse_event` | Model SSE chunk received | `input_token_count`, `output_token_count`, `cached_token_count`, `reasoning_token_count` |
| `codex.tool_result` | Tool invocation completes | `tool.name`, `success`, `duration_ms` |
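For illustration, the attribute payload of a `codex.tool_result` event might look like the following. This is a hypothetical shape assembled from the attributes listed above, not Codex's exact wire format:

```json
{
  "event.name": "codex.tool_result",
  "conversation.id": "b1946ac9-example",
  "tool.name": "shell",
  "success": true,
  "duration_ms": 412
}
```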
### Metrics

| Metric | Unit | Key attributes |
|---|---|---|
| `codex.thread.started` | count | `originator`, `model` |
| `codex.conversation.turn.count` | count | `model`, `slug` |
| `codex.turn.e2e_duration_ms` | histogram (ms) | `model`, `success` |
| `codex.turn.ttft.duration_ms` | histogram (ms) | `model` — time-to-first-token |
| `codex.turn.ttfm.duration_ms` | histogram (ms) | `model` — time-to-first-message |
| `codex.turn.token_usage` | histogram | `type` (input/output/cached/reasoning/tool) |
| `codex.turn.tool.call` | histogram | `tool.name` |
| `codex.turn.network_proxy` | count | `mode` |
| `codex.tool.call` | count | `tool.name` |
| `codex.tool.call.duration_ms` | histogram (ms) | `tool.name` |
| `codex.tool.unified_exec` | count | `command.kind` |
| `codex.websocket.request` | count | `endpoint` |
| `codex.websocket.request.duration_ms` | histogram (ms) | `endpoint` |
| `codex.websocket.event` | count | `event.kind` |
| `codex.mcp.tools.list.duration_ms` | histogram (ms) | `server.name` |
| `codex.mcp.tools.cache_write.duration_ms` | histogram (ms) | `server.name` |
| `codex.mcp.tools.fetch_uncached.duration_ms` | histogram (ms) | `server.name` |
| `codex.startup_prewarm.duration_ms` | histogram (ms) | `kind` |
| `codex.remote_models.load_cache.duration_ms` | histogram (ms) | standard attributes |
| `codex.plugins.startup_sync` | count | `status`, `transport` |
| `codex.shell_snapshot.duration_ms` | histogram (ms) | standard attributes |
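As a quick sanity query once metrics land, token throughput by type can be plotted. This is a sketch assuming OTel dots flatten to underscores with a `_sum` suffix in Last9's metric store:

```promql
sum by (type) (rate(codex_turn_token_usage_sum[5m]))
```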
## Prerequisites

- Last9 account — Sign up at app.last9.io
- Codex CLI — Install via `npm install -g @openai/codex` or `brew install codex`
- OTLP credentials — Get your endpoint and auth header from Integrations → OpenTelemetry
## Setup

Codex configures OpenTelemetry through TOML at `~/.codex/config.toml`. Three exporter blocks are configured separately — one for logs (`exporter`), one for traces (`trace_exporter`), and one for metrics (`metrics_exporter`).

- Get your Last9 OTLP credentials

  Navigate to Integrations → OpenTelemetry in your Last9 dashboard. Copy:

  - OTLP Endpoint (e.g., `https://otlp.last9.io` or a regional variant like `https://otlp-aps1.last9.io:443`)
  - Authorization header (e.g., `Basic <base64-token>`)

- Add the OTel block to `~/.codex/config.toml`

  Codex's HTTP exporter requires signal-specific endpoint paths (`/v1/logs`, `/v1/traces`, `/v1/metrics`). Append the following to your existing `~/.codex/config.toml`:

  ```toml
  # Required at the top level for metrics to flow.
  analytics_enabled = true

  [otel]
  environment = "dev"
  log_user_prompt = false

  [otel.exporter.otlp-http]
  endpoint = "https://<your-last9-otlp-endpoint>/v1/logs"
  protocol = "binary"
  headers = { Authorization = "Basic <your-last9-auth-token>" }

  [otel.trace_exporter.otlp-http]
  endpoint = "https://<your-last9-otlp-endpoint>/v1/traces"
  protocol = "binary"
  headers = { Authorization = "Basic <your-last9-auth-token>" }

  [otel.metrics_exporter.otlp-http]
  endpoint = "https://<your-last9-otlp-endpoint>/v1/metrics"
  protocol = "binary"
  headers = { Authorization = "Basic <your-last9-auth-token>" }

  [otel.span_attributes]
  "team" = "<your-team>"
  ```
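If you prefer gRPC over HTTP, the same three blocks accept an `otlp-grpc` variant with `endpoint` and `headers`. A sketch, assuming your Last9 endpoint accepts OTLP/gRPC on the same host (OTLP gRPC uses a single endpoint with no per-signal path):

```toml
[otel.exporter.otlp-grpc]
endpoint = "https://<your-last9-otlp-endpoint>:443"
headers = { Authorization = "Basic <your-last9-auth-token>" }

[otel.trace_exporter.otlp-grpc]
endpoint = "https://<your-last9-otlp-endpoint>:443"
headers = { Authorization = "Basic <your-last9-auth-token>" }

[otel.metrics_exporter.otlp-grpc]
endpoint = "https://<your-last9-otlp-endpoint>:443"
headers = { Authorization = "Basic <your-last9-auth-token>" }
```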
- Start a Codex session

  ```shell
  codex "summarize what this repo does"
  ```

  Or run non-interactively:

  ```shell
  codex exec "explain this file"
  ```

  - Logs and traces flush within a few seconds of each event
  - Metrics flush every 60 seconds by default and on shutdown
-
Verify data is arriving
- Traces — open Traces in Last9, filter by
service.name = codex_exec(orcodex_tuifor interactive sessions) - Metrics — open Metrics, search for
codex_turn_token_usage_sumorcodex_tool_call_total - Logs — open Logs, filter by
service.name = codex_exec
- Traces — open Traces in Last9, filter by
## Configuration reference

### Top-level

| Key | Default | Description |
|---|---|---|
| `analytics_enabled` | `false` | Must be `true` to enable the metrics exporter |
### `[otel]`

| Key | Default | Description |
|---|---|---|
| `environment` | `dev` | Environment tag (`dev`, `staging`, `prod`, etc.) |
| `log_user_prompt` | `false` | If `true`, includes the full prompt text in logs |
| `span_attributes` | `{}` | Map of resource attributes added to every span |
| `tracestate` | `{}` | Member fields upserted into the W3C `tracestate` header |
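Both maps are written as TOML tables. A sketch with illustrative keys (the `team` and `last9` entries here are placeholders, not required names):

```toml
[otel.span_attributes]
"team" = "platform"

[otel.tracestate]
"last9" = "codex"
```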
### Exporter blocks

Three exporter blocks share the same shape: `[otel.exporter]` for logs, `[otel.trace_exporter]` for traces, and `[otel.metrics_exporter]` for metrics. Each can be one of:

- OTLP HTTP — `[otel.<exporter>.otlp-http]` with `endpoint`, `protocol` (`binary` or `json`), and `headers`
- OTLP gRPC — `[otel.<exporter>.otlp-grpc]` with `endpoint` and `headers`
- None — `<exporter> = "none"` to disable
- Statsig — `<exporter> = "statsig"` (Codex's internal default for metrics)
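For example, to keep logs and traces flowing while disabling metrics export, the `none` variant can be set as a plain key. A sketch; key placement assumed from the block names above:

```toml
[otel]
environment = "dev"
metrics_exporter = "none"
```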
## What you can do in Last9

### Turn-level latency tracking (metrics)

`codex.turn.ttft.duration_ms` and `codex.turn.ttfm.duration_ms` capture time-to-first-token and time-to-first-message from the model. Plot p95/p99 over time to detect model degradation:

```promql
histogram_quantile(0.95, sum by (le, model) (rate(codex_turn_ttft_duration_ms_milliseconds_bucket[5m])))
```

### Token efficiency (metrics)
`codex.turn.token_usage` is a histogram broken down by `type` (input, output, cached, reasoning, tool). Compare cached versus input tokens to measure prompt cache efficiency.
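Under the same flattened naming as the latency query above, a prompt-cache ratio might be sketched as follows (metric and label names assumed, not verified against your store):

```promql
sum(rate(codex_turn_token_usage_sum{type="cached"}[5m]))
/
sum(rate(codex_turn_token_usage_sum{type="input"}[5m]))
```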
### Tool call latency (metrics)

`codex.tool.call.duration_ms` segmented by `tool.name` shows which tools dominate session time. Useful for spotting slow MCP servers or shell-heavy sessions.
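A per-tool p95 can be sketched in the same style, assuming the `tool.name` attribute flattens to a `tool_name` label and the histogram follows the bucket naming used earlier:

```promql
histogram_quantile(0.95, sum by (le, tool_name) (rate(codex_tool_call_duration_ms_milliseconds_bucket[5m])))
```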
### MCP server health (metrics)

`codex.mcp.tools.list.duration_ms`, `codex.mcp.tools.cache_write.duration_ms`, and `codex.mcp.tools.fetch_uncached.duration_ms` reveal which MCP servers are slow or thrashing the tool cache.
### Session replay via `conversation.id` (logs + traces)

Every log and span carries a `conversation.id`. Filter by it to reconstruct the full session sequence:

```
user_prompt → websocket_request → websocket_event → sse_event → tool_result → ...
```

### Error rate monitoring (logs + alerts)

`codex.websocket_event` events with `success = false` flag failed model calls. Create a Last9 alert on the rate of failures to catch upstream model degradation early.
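The failure-rate idea can be sketched locally in a few lines of Python. The event dicts here are hypothetical in-memory stand-ins mirroring the attributes documented above, not an actual Last9 export format:

```python
# Hypothetical exported events; shapes mirror the documented attributes.
events = [
    {"event": "codex.websocket_event", "conversation.id": "c1", "success": True},
    {"event": "codex.websocket_event", "conversation.id": "c1", "success": False},
    {"event": "codex.tool_result", "conversation.id": "c1", "success": True},
]

def websocket_failure_rate(events: list[dict]) -> float:
    """Fraction of codex.websocket_event records with success = false."""
    ws = [e for e in events if e["event"] == "codex.websocket_event"]
    return sum(1 for e in ws if not e["success"]) / len(ws) if ws else 0.0

print(websocket_failure_rate(events))  # 0.5
```

In Last9 the equivalent is an alert on the rate of `success = false` events rather than a batch computation.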
### Team-level tagging

Tag sessions by team or project via `[otel.span_attributes]`:

```toml
[otel.span_attributes]
"team" = "platform"
"project" = "infra-agent"
```

All spans from that session carry the labels, enabling per-team breakdowns in Last9.
## Troubleshooting

- No data in Last9

  - Confirm `analytics_enabled = true` is at the top level of `config.toml`, not nested under `[otel]`
  - Verify each exporter endpoint includes the signal-specific path (`/v1/logs`, `/v1/traces`, `/v1/metrics`) — Codex does not append it
  - Check that the auth header value starts with `Basic` and has no extra quotes
- Traces and logs flow but metrics are missing

  - The default `metrics_exporter` is `statsig`. Set `[otel.metrics_exporter.otlp-http]` (or `otlp-grpc`) explicitly
  - Confirm `analytics_enabled = true` — without it, `metrics_exporter` is forcibly disabled
  - Metrics flush every 60 seconds; wait at least 90 seconds before checking
  - Metric names in Last9 use underscores: `codex.tool.call` becomes `codex_tool_call_total`
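The name translation is simple enough to do by hand, or with a one-liner like this sketch (the suffix conventions follow typical Prometheus-style storage, as in the example above):

```python
def prom_metric_name(otel_name: str, suffix: str = "") -> str:
    # OTel dots flatten to underscores in Prometheus-style names; counters
    # typically gain _total, histograms expose _bucket/_sum/_count series.
    return otel_name.replace(".", "_") + suffix

print(prom_metric_name("codex.tool.call", "_total"))       # codex_tool_call_total
print(prom_metric_name("codex.turn.token_usage", "_sum"))  # codex_turn_token_usage_sum
```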
- Service name appears as `codex_exec` or `codex_tui` instead of `codex`

  - This is by design — Codex sets `service.name` from the running CLI binary (`codex_exec` for `codex exec`, `codex_tui` for interactive `codex`). Use a regex filter (`service.name =~ "codex_.*"`) in Last9 to cover both.
- Startup warnings about invalid `otel.span_attributes` or `otel.tracestate`

  - Codex logs these at startup and ignores invalid entries. Fix the keys or values to silence the warnings.
- 401 / authentication errors

  - Verify the header format: `Authorization = "Basic <token>"` (no `Bearer` prefix, no trailing whitespace)
  - Regenerate the token from Integrations → OpenTelemetry if it has expired
Please get in touch with us on Discord or Email if you have any questions.