
GitHub Copilot

Send GitHub Copilot agent telemetry — LLM calls, tool executions, token usage, and latency — to Last9 via OpenTelemetry.

The GitHub Copilot SDK lets you build custom AI agents that run inside VS Code. These agents emit structured OpenTelemetry telemetry for every invocation: LLM calls, tool executions, token usage, and latency. Routing this data to Last9 gives you full visibility into your agents’ performance and cost — without any custom instrumentation code.

The Copilot SDK follows OpenTelemetry GenAI semantic conventions, so spans and metrics use standardized attribute names compatible with any OTel backend.

What gets exported

Traces

The SDK emits a hierarchical span tree for each agent invocation. All spans follow the gen_ai.* semantic convention.

| Span | Description | Key attributes |
| --- | --- | --- |
| invoke_agent | Root span wrapping the entire agent turn | gen_ai.operation.name, gen_ai.provider.name, error.type |
| chat | Single LLM API call within the agent | gen_ai.request.model, gen_ai.usage.input_tokens, gen_ai.usage.output_tokens, gen_ai.response.finish_reasons |
| execute_tool | Single tool invocation | gen_ai.tool.name, gen_ai.tool.call.id, error.type |

Prompt and response content is not captured by default. Set captureContent: true to opt in — see Configuration reference.

Metrics

| Metric | Type | Description |
| --- | --- | --- |
| gen_ai.client.operation.duration | Histogram | LLM API call latency |
| gen_ai.client.token.usage | Counter | Input and output token counts |
| copilot_chat.tool.call.count | Counter | Number of tool invocations |
| copilot_chat.tool.call.duration | Histogram | Tool execution latency |
| copilot_chat.agent.invocation.duration | Histogram | End-to-end agent turn latency |
| copilot_chat.agent.turn.count | Counter | Total agent turns |
| copilot_chat.session.count | Counter | Total sessions started |
| copilot_chat.time_to_first_token | Histogram | Time to first token (streaming only) |

Prerequisites

  1. Last9 account — Sign up at app.last9.io
  2. GitHub Copilot subscription — Individual, Business, or Enterprise plan
  3. GitHub token — A user-level token with a Copilot subscription behind it:
    • Fine-grained PAT (github_pat_*) — easiest for local dev
    • GitHub App user access token (ghu_*) — recommended for production; use a GitHub App that performs the OAuth user authorization flow on behalf of a Copilot user
    • Classic PATs (ghp_*) and GitHub App installation tokens are not accepted — Copilot is billed per-user and requires a user-level token
  4. Copilot SDK — Installed in your agent project (Node.js, Python, Go, or .NET)
  5. OTLP credentials — Get your endpoint and auth header from Integrations → OpenTelemetry
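The token-type rules in step 3 can be sketched as a small check. This helper is not part of the Copilot SDK; it simply mirrors the prefix conventions listed above:

```typescript
// Hypothetical helper (not an SDK API): classifies a GitHub token by its
// documented prefix, matching the prerequisites above.
type TokenKind = "fine-grained-pat" | "user-access-token" | "unsupported";

function classifyGitHubToken(token: string): TokenKind {
  if (token.startsWith("github_pat_")) return "fine-grained-pat"; // easiest for local dev
  if (token.startsWith("ghu_")) return "user-access-token"; // recommended for production
  // Classic PATs (ghp_*) and GitHub App installation tokens are not accepted
  return "unsupported";
}
```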

Setup

  1. Get your Last9 OTLP credentials

    Navigate to Integrations → OpenTelemetry in your Last9 dashboard. Copy:

    • OTLP Endpoint (e.g., https://otlp-aps1.last9.io:443)
    • Authorization header (e.g., Basic <base64-token>)
  2. Install the SDK with telemetry support

    npm install @github/copilot-sdk
  3. Configure telemetry in your agent

    Pass a TelemetryConfig when constructing the CopilotClient. Set the otlpEndpoint to your Last9 OTLP endpoint and include your authorization header.

    import { CopilotClient } from "@github/copilot-sdk";

    const client = new CopilotClient({
      telemetry: {
        exporterType: "otlp-http",
        otlpEndpoint: "https://<your-last9-otlp-endpoint>",
        sourceName: "my-copilot-agent",
        captureContent: false, // set true to include prompts/responses
      },
    });

    To add the Authorization header, set the standard OTel environment variable before starting your agent:

    export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <your-last9-auth-token>"
    export OTEL_SERVICE_NAME="my-copilot-agent"
  4. Invoke your agent and verify data

    Run your agent and trigger at least one interaction. Then in Last9:

    • Traces — navigate to the APM section, filter by service.name = my-copilot-agent, and look for invoke_agent root spans
    • Metrics — navigate to Metrics, search for copilot_chat_agent_invocation_duration
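Note the naming difference between the two views: spans use the dotted OTel metric name (copilot_chat.agent.invocation.duration), while the metrics search uses underscores. The mapping is a plain dot-to-underscore substitution, sketched here for reference:

```typescript
// OTel metric names use dots; Prometheus-style backends expose them with
// underscores. This shows the name to search for in Last9 Metrics.
function toPromName(otelMetric: string): string {
  return otelMetric.replace(/\./g, "_");
}

console.log(toPromName("copilot_chat.agent.invocation.duration"));
// copilot_chat_agent_invocation_duration
```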

Configuration reference

TelemetryConfig fields

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| exporterType | string | "otlp-http" | "otlp-http" to send to Last9; "file" to write JSON-lines locally |
| otlpEndpoint | string | (none) | Last9 OTLP endpoint (HTTP, port 443 or 4318) |
| filePath | string | (none) | Output path when exporterType is "file" |
| sourceName | string | (none) | Instrumentation scope name; appears as the tracer name in Last9 |
| captureContent | boolean | false | Include prompt text and model responses in spans |
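For local debugging it can be handy to switch to the file exporter. The sketch below uses only the field names from the table above; the interface is a local stand-in for illustration, not the SDK's own type, and the file path is hypothetical:

```typescript
// Local stand-in for the SDK's TelemetryConfig shape (field names from the
// table above; this is illustrative, not the SDK's exported type).
interface TelemetryConfig {
  exporterType: "otlp-http" | "file";
  otlpEndpoint?: string;
  filePath?: string;
  sourceName?: string;
  captureContent?: boolean;
}

const debugTelemetry: TelemetryConfig = {
  exporterType: "file",                  // write JSON-lines locally instead of exporting
  filePath: "./copilot-telemetry.jsonl", // hypothetical output path
  sourceName: "my-copilot-agent",
  captureContent: true,                  // include prompts/responses while debugging
};
```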

Environment variables

These standard OTel variables are respected by the Copilot SDK and work alongside TelemetryConfig:

| Variable | Description |
| --- | --- |
| OTEL_EXPORTER_OTLP_HEADERS | Authentication header: Authorization=Basic <token> |
| OTEL_SERVICE_NAME | Service name tag on all signals (default: "copilot-chat") |
| OTEL_RESOURCE_ATTRIBUTES | Additional resource tags, e.g. team=platform,env=prod |
| OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT | Set true as an alternative to captureContent: true |
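OTEL_RESOURCE_ATTRIBUTES takes a comma-separated list of key=value pairs, as defined by the OpenTelemetry specification. A quick sketch of how a value like the example above is interpreted:

```typescript
// Parses an OTEL_RESOURCE_ATTRIBUTES-style string ("k1=v1,k2=v2")
// into a key/value map, as OTel SDKs do.
function parseResourceAttributes(raw: string): Record<string, string> {
  const attrs: Record<string, string> = {};
  for (const pair of raw.split(",")) {
    const eq = pair.indexOf("=");
    if (eq > 0) attrs[pair.slice(0, eq).trim()] = pair.slice(eq + 1).trim();
  }
  return attrs;
}

console.log(parseResourceAttributes("team=platform,env=prod"));
// { team: "platform", env: "prod" }
```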

What you can do in Last9

Agent latency breakdown (traces)

Each invoke_agent span contains child chat and execute_tool spans. In Last9 APM, expand a trace to see exactly how much time was spent on LLM calls versus tool execution within a single agent turn — useful for identifying slow tools or high-latency models.

Token and cost tracking (metrics)

gen_ai.client.token.usage is broken down by gen_ai.request.model. Use it to:

  • Compare token consumption across different models
  • Track usage trends over time
  • Alert when token consumption spikes unexpectedly
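Token counts can also feed a rough cost estimate. The rates below are purely illustrative placeholders, not real pricing; substitute your provider's actual per-token rates:

```typescript
// Hypothetical per-million-token rates in USD -- illustrative only,
// substitute your provider's real pricing for each model.
const RATES: Record<string, { input: number; output: number }> = {
  "example-model": { input: 2.5, output: 10 },
};

function estimateCostUSD(model: string, inputTokens: number, outputTokens: number): number {
  const r = RATES[model];
  if (!r) return 0; // unknown model: no rate on file
  return (inputTokens * r.input + outputTokens * r.output) / 1_000_000;
}

console.log(estimateCostUSD("example-model", 200_000, 50_000)); // 1
```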

Tool performance (metrics)

copilot_chat.tool.call.duration and copilot_chat.tool.call.count let you measure which tools are called most frequently and which are slowest — useful for optimizing agent prompts and tool implementations.

Error rate monitoring (traces)

Failed LLM calls and tool executions set error.type on their spans. Create a Last9 alert on error rate for invoke_agent spans to catch Copilot API degradation or tool failures before users report them.

Team tagging

Use OTEL_RESOURCE_ATTRIBUTES to tag agents by team or environment:

export OTEL_RESOURCE_ATTRIBUTES="team=platform,deployment.environment=production,project=my-agent"

All signals from that agent carry these labels, enabling per-team breakdowns in Last9 dashboards.


Troubleshooting

  • No traces in Last9 after running the agent

    • Confirm exporterType: "otlp-http" — the SDK defaults to HTTP but double-check it is not set to "file"
    • Verify the otlpEndpoint value does not have a trailing path (e.g., use https://otlp-aps1.last9.io:443, not https://otlp-aps1.last9.io:443/v1/traces)
    • Check that OTEL_EXPORTER_OTLP_HEADERS is exported in the same shell session where the agent runs
  • 401 / authentication errors

    • Verify the header format: Authorization=Basic <token> (no extra quotes, no Bearer prefix)
    • Regenerate the token from Integrations → OpenTelemetry if it has expired
  • Spans appear but no metrics

    • Metrics may take up to 60 seconds to flush after the first invocation — wait before checking
    • Confirm the SDK version supports metrics export (check the SDK changelog for your language)
  • Content is missing from spans

    • By default, prompts and responses are not captured — set captureContent: true in TelemetryConfig to enable

Please get in touch with us on Discord or by email if you have any questions.