# GitHub Copilot
Send GitHub Copilot agent telemetry — LLM calls, tool executions, token usage, and latency — to Last9 via OpenTelemetry.
The GitHub Copilot SDK lets you build custom AI agents that run inside VS Code. These agents emit structured OpenTelemetry telemetry for every invocation: LLM calls, tool executions, token usage, and latency. Routing this data to Last9 gives you full visibility into your agents’ performance and cost — without any custom instrumentation code.
The Copilot SDK follows OpenTelemetry GenAI semantic conventions, so spans and metrics use standardized attribute names compatible with any OTel backend.
## What gets exported

### Traces

The SDK emits a hierarchical span tree for each agent invocation. All spans follow the `gen_ai.*` semantic conventions.

| Span | Description | Key attributes |
|---|---|---|
| `invoke_agent` | Root span wrapping the entire agent turn | `gen_ai.operation.name`, `gen_ai.provider.name`, `error.type` |
| `chat` | Single LLM API call within the agent | `gen_ai.request.model`, `gen_ai.usage.input_tokens`, `gen_ai.usage.output_tokens`, `gen_ai.response.finish_reasons` |
| `execute_tool` | Single tool invocation | `gen_ai.tool.name`, `gen_ai.tool.call.id`, `error.type` |
Prompt and response content is not captured by default. Set `captureContent: true` to opt in — see Configuration reference.
### Metrics

| Metric | Type | Description |
|---|---|---|
| `gen_ai.client.operation.duration` | Histogram | LLM API call latency |
| `gen_ai.client.token.usage` | Counter | Input and output token counts |
| `copilot_chat.tool.call.count` | Counter | Number of tool invocations |
| `copilot_chat.tool.call.duration` | Histogram | Tool execution latency |
| `copilot_chat.agent.invocation.duration` | Histogram | End-to-end agent turn latency |
| `copilot_chat.agent.turn.count` | Counter | Total agent turns |
| `copilot_chat.session.count` | Counter | Total sessions started |
| `copilot_chat.time_to_first_token` | Histogram | Time to first token (streaming only) |
## Prerequisites

- **Last9 account** — Sign up at app.last9.io
- **GitHub Copilot subscription** — Individual, Business, or Enterprise plan
- **GitHub token** — A user-level token with a Copilot subscription behind it:
  - Fine-grained PAT (`github_pat_*`) — easiest for local dev
  - GitHub App user access token (`ghu_*`) — recommended for production; use a GitHub App that performs the OAuth user authorization flow on behalf of a Copilot user
  - Classic PATs (`ghp_*`) and GitHub App installation tokens are not accepted — Copilot is billed per-user and requires a user-level token
- **Copilot SDK** — Installed in your agent project (Node.js, Python, Go, or .NET)
- **OTLP credentials** — Get your endpoint and auth header from Integrations → OpenTelemetry
## Setup

1. **Get your Last9 OTLP credentials**

   Navigate to Integrations → OpenTelemetry in your Last9 dashboard. Copy:

   - OTLP Endpoint (e.g., `https://otlp-aps1.last9.io:443`)
   - Authorization header (e.g., `Basic <base64-token>`)
2. **Install the SDK with telemetry support**

   Node.js:

   ```shell
   npm install @github/copilot-sdk
   ```

   Python:

   ```shell
   pip install "copilot-sdk[telemetry]"
   ```

   Go:

   ```shell
   go get github.com/github/copilot-sdk-go
   ```

   .NET:

   ```shell
   dotnet add package GitHub.Copilot.SDK
   ```
3. **Configure telemetry in your agent**

   Pass a `TelemetryConfig` when constructing the `CopilotClient`. Set the `otlpEndpoint` to your Last9 OTLP endpoint and include your authorization header.

   Node.js:

   ```typescript
   import { CopilotClient } from "@github/copilot-sdk";

   const client = new CopilotClient({
     telemetry: {
       exporterType: "otlp-http",
       otlpEndpoint: "https://<your-last9-otlp-endpoint>",
       sourceName: "my-copilot-agent",
       captureContent: false, // set true to include prompts/responses
     },
   });
   ```

   Python:

   ```python
   from copilot_sdk import CopilotClient, SubprocessConfig

   client = CopilotClient(SubprocessConfig(telemetry={
       "exporter_type": "otlp-http",
       "otlp_endpoint": "https://<your-last9-otlp-endpoint>",
       "source_name": "my-copilot-agent",
       "capture_content": False,
   }))
   ```

   Go:

   ```go
   import copilot "github.com/github/copilot-sdk-go"

   client, err := copilot.NewClient(copilot.ClientConfig{
       Telemetry: &copilot.TelemetryConfig{
           ExporterType:   "otlp-http",
           OTLPEndpoint:   "https://<your-last9-otlp-endpoint>",
           SourceName:     "my-copilot-agent",
           CaptureContent: false,
       },
   })
   ```

   .NET:

   ```csharp
   using GitHub.Copilot.SDK;

   var client = new CopilotClient(new ClientConfig
   {
       Telemetry = new TelemetryConfig
       {
           ExporterType = "otlp-http",
           OtlpEndpoint = "https://<your-last9-otlp-endpoint>",
           SourceName = "my-copilot-agent",
           CaptureContent = false,
       }
   });
   ```

   To add the `Authorization` header, set the standard OTel environment variables before starting your agent (all languages):

   ```shell
   export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic <your-last9-auth-token>"
   export OTEL_SERVICE_NAME="my-copilot-agent"
   ```
4. **Invoke your agent and verify data**

   Run your agent and trigger at least one interaction. Then in Last9:

   - Traces — navigate to the APM section, filter by `service.name = my-copilot-agent`, and look for `invoke_agent` root spans
   - Metrics — navigate to Metrics, search for `copilot_chat_agent_invocation_duration`
## Configuration reference

### TelemetryConfig fields

| Field | Type | Default | Description |
|---|---|---|---|
| `exporterType` | string | `"otlp-http"` | `"otlp-http"` to send to Last9; `"file"` to write JSON lines locally |
| `otlpEndpoint` | string | — | Last9 OTLP endpoint (HTTP, port 443 or 4318) |
| `filePath` | string | — | Output path when `exporterType` is `"file"` |
| `sourceName` | string | — | Instrumentation scope name — appears as the tracer name in Last9 |
| `captureContent` | boolean | `false` | Include prompt text and model responses in spans |
### Environment variables

These standard OTel variables are respected by the Copilot SDK and work alongside `TelemetryConfig`:

| Variable | Description |
|---|---|
| `OTEL_EXPORTER_OTLP_HEADERS` | Authentication header: `Authorization=Basic <token>` |
| `OTEL_SERVICE_NAME` | Service name tag on all signals (default: `"copilot-chat"`) |
| `OTEL_RESOURCE_ATTRIBUTES` | Additional resource tags, e.g. `team=platform,env=prod` |
| `OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT` | Set `true` as an alternative to `captureContent: true` |
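The `Basic <token>` value is standard HTTP Basic auth. If your Last9 integrations page shows a raw username/password pair rather than a pre-encoded token, a minimal shell sketch for building the header (the `LAST9_USER` and `LAST9_PASS` names and values are placeholders):

```shell
# Placeholder credentials — substitute the values from your Last9 dashboard.
LAST9_USER="<username>"
LAST9_PASS="<password>"

# Basic auth is base64("username:password"); printf avoids a trailing newline.
TOKEN="$(printf '%s:%s' "$LAST9_USER" "$LAST9_PASS" | base64)"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Basic ${TOKEN}"
```

If the dashboard already shows the fully encoded `Basic ...` header, use it as-is instead.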
## What you can do in Last9

### Agent latency breakdown (traces)

Each `invoke_agent` span contains child `chat` and `execute_tool` spans. In Last9 APM, expand a trace to see exactly how much time was spent on LLM calls versus tool execution within a single agent turn — useful for identifying slow tools or high-latency models.
### Token and cost tracking (metrics)

`gen_ai.client.token.usage` is broken down by `gen_ai.request.model`. Use it to:
- Compare token consumption across different models
- Track usage trends over time
- Alert when token consumption spikes unexpectedly
### Tool performance (metrics)

`copilot_chat.tool.call.duration` and `copilot_chat.tool.call.count` let you measure which tools are called most frequently and which are slowest — useful for optimizing agent prompts and tool implementations.
### Error rate monitoring (traces)

Failed LLM calls and tool executions set `error.type` on their spans. Create a Last9 alert on error rate for `invoke_agent` spans to catch Copilot API degradation or tool failures before users report them.
### Team tagging

Use `OTEL_RESOURCE_ATTRIBUTES` to tag agents by team or environment:

```shell
export OTEL_RESOURCE_ATTRIBUTES="team=platform,deployment.environment=production,project=my-agent"
```

All signals from that agent carry these labels, enabling per-team breakdowns in Last9 dashboards.
## Troubleshooting

- **No traces in Last9 after running the agent**

  - Confirm `exporterType: "otlp-http"` — the SDK defaults to HTTP, but double-check it is not set to `"file"`
  - Verify the `otlpEndpoint` value does not have a trailing path (e.g., use `https://otlp-aps1.last9.io:443`, not `https://otlp-aps1.last9.io:443/v1/traces`)
  - Check that `OTEL_EXPORTER_OTLP_HEADERS` is exported in the same shell session where the agent runs
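  To rule out network or credential issues independently of the SDK, you can probe the traces endpoint directly with `curl`. This is a sketch with placeholder values; most OTLP/HTTP backends accept an empty JSON payload and return a 2xx status when the endpoint and auth header are valid:

  ```shell
  # Prints only the HTTP status code. Expect 2xx on success; 401 means a bad
  # auth header, and a connection error means the endpoint or port is wrong.
  curl -sS -o /dev/null -w "%{http_code}\n" \
    -X POST "https://<your-last9-otlp-endpoint>/v1/traces" \
    -H "Content-Type: application/json" \
    -H "Authorization: Basic <your-last9-auth-token>" \
    -d '{}'
  ```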
- **401 / authentication errors**

  - Verify the header format: `Authorization=Basic <token>` (no extra quotes, no `Bearer` prefix)
  - Regenerate the token from Integrations → OpenTelemetry if it has expired
- **Spans appear but no metrics**

  - Metrics may take up to 60 seconds to flush after the first invocation — wait before checking
  - Confirm the SDK version supports metrics export (check the SDK changelog for your language)
- **Content is missing from spans**

  - By default, prompts and responses are not captured — set `captureContent: true` in `TelemetryConfig` to enable
Please get in touch with us on Discord or Email if you have any questions.