A Complete Guide to Integrating OpenTelemetry with FastAPI
Learn how to integrate OpenTelemetry with FastAPI for enhanced observability, including automatic instrumentation, environment variables, and custom exporters.
FastAPI is one of the most popular frameworks for building APIs in Python, known for its performance and ease of use.
When you combine FastAPI with OpenTelemetry, you gain enhanced observability for your application, allowing you to monitor, trace, and debug your services with ease.
In this guide, we’ll walk you through how to integrate OpenTelemetry with FastAPI, providing you with insights into both performance monitoring and error tracing.
What is FastAPI?
FastAPI is a modern web framework for building APIs with Python. It’s designed to be fast and efficient, supporting asynchronous programming to handle high-performance applications.
It comes with automatic OpenAPI and JSON Schema generation, making it developer-friendly and easy to document your APIs.
As a developer, one of the key challenges when building APIs is ensuring smooth performance, identifying bottlenecks, and tracking down bugs. That’s where OpenTelemetry comes in.
What is OpenTelemetry?
OpenTelemetry is an open-source framework for collecting observability data (such as metrics, logs, and traces) from applications.
It standardizes the process of generating, collecting, and exporting telemetry data across various systems.
With OpenTelemetry, you can get a clear view of how your application is performing, which parts are slow, and where issues arise, even in complex distributed systems.
Why Use OpenTelemetry with FastAPI?
FastAPI makes it easy to build fast, efficient APIs, but it doesn’t natively include tools for monitoring and tracing.
OpenTelemetry fills this gap by providing powerful tools for:
Distributed Tracing: Track the journey of requests as they move through different services and components, helping you understand system behavior.
Metrics Collection: Collect and export key metrics to understand performance, such as request duration, throughput, and error rates.
Logging Integration: Correlate logs with traces and metrics for better debugging and troubleshooting.
Integrating OpenTelemetry with FastAPI allows you to monitor your FastAPI-based application in real-time, improving reliability and helping you spot performance issues early.
Step-by-Step Guide to Integrating OpenTelemetry with FastAPI
In this section, we’ll walk you through the steps to integrate OpenTelemetry with a FastAPI application.
Step 1: Install Required Libraries
Before you start integrating OpenTelemetry, make sure you have FastAPI and the necessary OpenTelemetry libraries installed in your environment. You can do this by running the following:
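For example, with pip (adjust the exporter package to match your backend):
pip install fastapi uvicorn opentelemetry-api opentelemetry-sdk opentelemetry-instrumentation-fastapi opentelemetry-exporter-otlp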
This will install FastAPI, OpenTelemetry, and the necessary exporters for sending telemetry data.
Step 2: Initialize OpenTelemetry
To begin using OpenTelemetry with FastAPI, you'll need to initialize the OpenTelemetry SDK: set up a tracer provider and configure an exporter that sends your traces to a backend.
In your main.py (or the entry point for your FastAPI app), add the following code:
from fastapi import FastAPI
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
# Initialize OpenTelemetry
trace.set_tracer_provider(TracerProvider())
tracer = trace.get_tracer(__name__)
# Set up OTLP exporter
otlp_exporter = OTLPSpanExporter(endpoint="http://your-otlp-endpoint:4317")
span_processor = BatchSpanProcessor(otlp_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
# Initialize FastAPI app
app = FastAPI()
# Instrument FastAPI with OpenTelemetry
FastAPIInstrumentor.instrument_app(app)
In the code above:
We set up the OpenTelemetry tracer provider.
We configure the OTLP exporter to send data to a specific endpoint.
We use the FastAPIInstrumentor to automatically capture traces from FastAPI routes.
Step 3: Define FastAPI Endpoints
Now, you can define your FastAPI routes. OpenTelemetry will automatically generate traces for each request hitting these endpoints. Here’s an example of a simple FastAPI application with a couple of endpoints:
from fastapi import FastAPI
from opentelemetry import trace
app = FastAPI()
@app.get("/")
async def root():
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("root_request"):
        return {"message": "Hello, FastAPI with OpenTelemetry!"}
@app.get("/user/{user_id}")
async def read_user(user_id: int):
    tracer = trace.get_tracer(__name__)
    with tracer.start_as_current_span("read_user_request"):
        return {"user_id": user_id}
In this example:
Each endpoint creates a trace span, marking the execution of that request.
The tracer.start_as_current_span method automatically associates the span with the current execution context.
Step 4: Export Traces to Your Backend
Once traces are collected, they need to be exported to a backend for analysis and visualization. OpenTelemetry supports several exporters, including:
OTLP Exporter (for exporting traces and metrics to services like Honeycomb, OpenSearch, and others)
Prometheus Exporter (for metrics)
Jaeger or Zipkin (for distributed tracing)
In the example above, we’ve configured the OTLP exporter, but you can replace it with any exporter that fits your needs.
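For example, during local development you might not have an OTLP backend running at all; a minimal sketch that swaps in the SDK's console exporter (which simply prints finished spans to stdout) looks like this:
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter
# Print finished spans to stdout instead of shipping them to a remote backend
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)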
Step 5: View Your Traces and Metrics
Once the application is running, you can view the telemetry data in your backend. If you're using an OTLP-compatible service, you’ll be able to see the performance metrics and traces from your FastAPI application in real-time.
This makes it easy to identify slow endpoints, monitor overall system health, and get detailed context for debugging.
Configuring Environment Variables for Monitoring with OpenTelemetry
When setting up OpenTelemetry for monitoring your application, you often need to configure environment variables to control how telemetry data is collected, processed, and exported.
Environment variables are a powerful way to manage configurations without hardcoding them into your application. Below are some detailed examples of configuring environment variables for monitoring with OpenTelemetry.
1. Setting the Tracer Provider and Exporter Configuration
OpenTelemetry allows you to specify a tracer provider and exporter through environment variables.
These variables define how traces and other telemetry data are handled, including where they are sent.
Example: Setting the OTLP Exporter
The OTLP exporter is commonly used for sending traces and metrics to an OTLP-compatible backend such as Jaeger, Honeycomb, or another observability service. You can configure the OTLP exporter using the following environment variables:
# Set the endpoint for the OTLP exporter (e.g., your OTLP backend URL)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://your-otlp-endpoint:4317"
# Set the service name (this helps in identifying the source of traces)
export OTEL_SERVICE_NAME="my-fastapi-service"
# Optional: Set authentication credentials if needed (for example, API keys or token-based authentication)
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer your-api-token"
# Set the trace export timeout (in seconds)
export OTEL_EXPORTER_OTLP_TIMEOUT=5
These environment variables configure OpenTelemetry to export traces to a specified endpoint.
The OTEL_EXPORTER_OTLP_ENDPOINT is particularly important because it tells OpenTelemetry where to send the trace data (e.g., to your OTLP endpoint, which could be a monitoring service like Honeycomb or OpenSearch).
Example: Jaeger Exporter
If you’re using Jaeger for distributed tracing, you can configure it similarly by setting the relevant environment variables:
# Set the endpoint for the Jaeger exporter (usually a Jaeger agent or collector)
export OTEL_EXPORTER_JAEGER_AGENT_HOST="localhost"
# Specify the port where the Jaeger agent is listening (6831 is the default for the compact Thrift protocol)
export OTEL_EXPORTER_JAEGER_AGENT_PORT="6831"
# Set the trace sampler (sampling strategy)
export OTEL_TRACES_SAMPLER="parentbased_always_on"
In this case, traces are sent to a Jaeger agent running on localhost. The OTEL_TRACES_SAMPLER variable selects the sampling strategy; parentbased_always_on samples every root span and follows the parent's sampling decision for child spans.
2. Setting Up Sampling Rate
Sampling is an essential aspect of telemetry. By adjusting the sampling rate, you can control the volume of telemetry data being generated and transmitted. Sampling too much data can lead to performance overhead, while too little data might not provide sufficient insights.
Example: Configuring Trace Sampling Rate
You can control trace sampling with the OTEL_TRACES_SAMPLER environment variable. The available options include:
always_on: Collect all traces.
always_off: Collect no traces.
parentbased_always_on: Sample every root span and follow the parent's sampling decision for child spans.
parentbased_traceidratio: Follow the parent's decision when one exists; otherwise sample root spans at a ratio set with OTEL_TRACES_SAMPLER_ARG.
To set the sampling rate:
# Collect all traces
export OTEL_TRACES_SAMPLER="always_on"
# Sample a fraction of traces (e.g., 50%)
export OTEL_TRACES_SAMPLER="parentbased_traceidratio"
export OTEL_TRACES_SAMPLER_ARG="0.5"
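If you prefer to configure sampling in code rather than through environment variables, the SDK exposes the same strategies as sampler classes; here is a brief sketch assuming the 50% ratio used above:
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased
# Sample 50% of root spans; child spans follow the parent's decision
sampler = ParentBased(TraceIdRatioBased(0.5))
trace.set_tracer_provider(TracerProvider(sampler=sampler))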
3. Setting Up Metrics Collection
OpenTelemetry also allows you to collect and export metrics, such as request counts, latency, and error rates. These metrics are typically exported to a monitoring system like Prometheus.
Example: Configuring Prometheus Exporter
If you are using Prometheus to collect metrics, you can use the Prometheus exporter. The environment variables for configuring Prometheus might look like this:
# Use the Prometheus metrics exporter
export OTEL_METRICS_EXPORTER="prometheus"
# Set the port on which the Prometheus exporter exposes metrics
export OTEL_EXPORTER_PROMETHEUS_PORT=9464
# Optional: Add resource attributes to identify different environments
export OTEL_RESOURCE_ATTRIBUTES="deployment.environment=production,app=my-fastapi-app"
In this example:
OTEL_METRICS_EXPORTER tells OpenTelemetry which metrics exporter to use (Prometheus in this case).
OTEL_EXPORTER_PROMETHEUS_PORT sets the port on which the Prometheus exporter exposes its metrics endpoint.
OTEL_RESOURCE_ATTRIBUTES attaches extra attributes to your telemetry for better filtering in your monitoring system.
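If you prefer to wire this up in code, the Python Prometheus exporter is exposed as a metric reader; a minimal sketch, assuming the opentelemetry-exporter-prometheus and prometheus-client packages are installed:
from prometheus_client import start_http_server
from opentelemetry.metrics import set_meter_provider
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.exporter.prometheus import PrometheusMetricReader
# Expose a /metrics endpoint on port 9464 for Prometheus to scrape
start_http_server(port=9464)
reader = PrometheusMetricReader()
set_meter_provider(MeterProvider(metric_readers=[reader]))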
4. Configuring Log Exporters
OpenTelemetry provides support for logging as well. Logs can be exported to backends such as Elasticsearch or an OTLP-compatible service and correlated with the active trace context.
Example: Configuring Log Exporter (via OTLP)
You can configure log exporting to an OTLP-compatible service using the following environment variables:
# Set the endpoint for log exporter (OTLP)
export OTEL_LOGS_EXPORTER="otlp"
export OTEL_EXPORTER_OTLP_ENDPOINT="http://your-otlp-endpoint:4317"
This configuration sends logs through the OTLP exporter so you can correlate them with your traces and metrics.
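In code, the Python log pipeline (still marked experimental at the time of writing, so the module paths below may change between SDK versions) can be attached to the standard logging module roughly like this:
import logging
from opentelemetry._logs import set_logger_provider
from opentelemetry.sdk._logs import LoggerProvider, LoggingHandler
from opentelemetry.sdk._logs.export import BatchLogRecordProcessor
from opentelemetry.exporter.otlp.proto.grpc._log_exporter import OTLPLogExporter
# Route standard-library log records through the OTLP log exporter
logger_provider = LoggerProvider()
logger_provider.add_log_record_processor(
    BatchLogRecordProcessor(OTLPLogExporter(endpoint="http://your-otlp-endpoint:4317"))
)
set_logger_provider(logger_provider)
logging.getLogger().addHandler(LoggingHandler(logger_provider=logger_provider))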
5. Configuring Service Name and Additional Metadata
It's important to configure the service name and any additional metadata you want to associate with your application’s telemetry data.
# Set the service name (important for distinguishing different services in a distributed system)
export OTEL_SERVICE_NAME="fastapi-app"
# Optional: Set the application version and deployment environment via resource attributes
export OTEL_RESOURCE_ATTRIBUTES="service.version=1.0.0,deployment.environment=production"
These environment variables help ensure that the telemetry data generated by your application is labeled appropriately, which makes it easier to filter and analyze later.
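The same metadata can also be attached in code through the SDK's Resource API, which is what these environment variables populate under the hood; a short sketch:
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
# Attach service metadata to every span produced by this provider
resource = Resource.create({
    "service.name": "fastapi-app",
    "service.version": "1.0.0",
    "deployment.environment": "production",
})
trace.set_tracer_provider(TracerProvider(resource=resource))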
6. Additional Configuration for Debugging
OpenTelemetry also allows you to enable debug logs for troubleshooting your configuration.
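For example:
# Enable debug logging for the OpenTelemetry SDK
export OTEL_LOG_LEVEL="debug"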
This will print debug-level logs, helping you troubleshoot issues with your OpenTelemetry setup.
7. Running Your Application with Environment Variables
Once you’ve set up the necessary environment variables, you can start your application with these variables in place.
If you’re running your FastAPI app with uvicorn, you can set the environment variables in your terminal session or add them to a .env file (which can be loaded with libraries like python-dotenv).
# Run the FastAPI app with the configured environment variables
uvicorn main:app --reload
This command will launch your FastAPI application, and the OpenTelemetry configuration will ensure that telemetry data is collected and exported based on the environment variables you've set.
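If you go the .env route mentioned above, a minimal sketch using python-dotenv (assuming a .env file sits next to main.py) is to load the file at the very top of main.py, before the OpenTelemetry SDK is configured:
from dotenv import load_dotenv
# Load OTEL_* variables from .env before the OpenTelemetry SDK reads them
load_dotenv()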
How to Use Automatic Instrumentation and Environment Variables with OTel
1. Automatic Instrumentation with OTel in FastAPI
OpenTelemetry offers automatic instrumentation for a variety of frameworks, including FastAPI.
This feature allows you to trace incoming requests, capture spans, and measure performance metrics without having to manually instrument each endpoint. With the opentelemetry-instrumentation-fastapi package, you can quickly set up OpenTelemetry to monitor your FastAPI application with minimal effort.
Why Automatic Instrumentation is Beneficial
Automatic instrumentation is a huge time-saver because it eliminates the need to add tracing code to each endpoint individually.
Instead, OpenTelemetry automatically hooks into FastAPI’s request lifecycle, allowing you to track each request from start to finish and gain valuable insights into your application’s behavior.
Steps for Automatic Instrumentation
1. Install the Necessary Packages
To get started with automatic instrumentation in FastAPI, you first need to install the OpenTelemetry instrumentation package for FastAPI:
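# Instrumentation package, plus the OTLP exporter used in the example below
pip install opentelemetry-instrumentation-fastapi opentelemetry-exporter-otlp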
2. Instrument Your FastAPI Application
OpenTelemetry offers the FastAPIInstrumentor class to automatically handle the instrumentation. In your FastAPI application, simply add the following code:
from fastapi import FastAPI
from opentelemetry import trace
from opentelemetry.instrumentation.fastapi import FastAPIInstrumentor
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
# Initialize OpenTelemetry
tracer_provider = TracerProvider()
trace.set_tracer_provider(tracer_provider)
# Set up the OTLP exporter (export traces to OTLP-compatible backends)
otlp_exporter = OTLPSpanExporter(endpoint="http://your-otlp-endpoint:4317")
span_processor = BatchSpanProcessor(otlp_exporter)
tracer_provider.add_span_processor(span_processor)
# Initialize the FastAPI app
app = FastAPI()
# Automatically instrument FastAPI with OpenTelemetry
FastAPIInstrumentor.instrument_app(app)
@app.get("/")
async def read_root():
    return {"message": "Hello, FastAPI with OpenTelemetry!"}
In this example:
FastAPIInstrumentor.instrument_app(app) automatically instruments all FastAPI routes.
The OTLPSpanExporter sends the trace data to an OTLP-compatible backend like Honeycomb, Jaeger, or OpenSearch.
3. View Traces and Metrics
With automatic instrumentation enabled, OpenTelemetry will begin generating spans for every request to your FastAPI application.
These spans can be exported to your configured backend, where you can monitor traces, request durations, and any errors that occur.
2. Using Environment Variables to Configure OpenTelemetry in FastAPI
Environment variables provide a flexible way to configure OpenTelemetry without modifying your application’s code. They allow you to manage settings like exporter endpoints, trace sampling rates, and service metadata without the need for hardcoded values.
Here’s how to configure OpenTelemetry in a FastAPI app using environment variables:
Common OpenTelemetry Environment Variables
Service Name and Exporter Configuration
These environment variables control the service name, which is useful for distinguishing between multiple services, and the endpoint where telemetry data will be sent:
# Set the service name
export OTEL_SERVICE_NAME="fastapi-app"
# Set the endpoint for OTLP exporter (Jaeger, Honeycomb, etc.)
export OTEL_EXPORTER_OTLP_ENDPOINT="http://your-otlp-endpoint:4317"
# Set the endpoint for Jaeger exporter (if using Jaeger)
export OTEL_EXPORTER_JAEGER_AGENT_HOST="localhost"
export OTEL_EXPORTER_JAEGER_AGENT_PORT="6831"
Trace Sampling and Logging Configuration
Sampling controls how many traces OpenTelemetry captures, and logging is useful for debugging:
# Enable trace collection (adjust sampling rate as needed)
export OTEL_TRACES_SAMPLER="parentbased_traceidratio"
export OTEL_TRACES_SAMPLER_ARG="0.5" # 50% sampling rate
# Enable debug logging for OpenTelemetry SDK (useful for troubleshooting)
export OTEL_LOG_LEVEL="debug"
Exporter Headers and Timeouts
If your exporter requires authentication (like an API token), you can pass it via headers:
# Set headers for authentication (if required by your exporter)
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=Bearer YOUR_API_TOKEN"
# Set the timeout for the exporter
export OTEL_EXPORTER_OTLP_TIMEOUT="5" # Timeout in seconds
Metrics Exporter Configuration
If you want to collect and export metrics, OpenTelemetry allows you to configure the metrics exporter as well:
# Set up Prometheus exporter
export OTEL_METRICS_EXPORTER="prometheus"
export OTEL_EXPORTER_PROMETHEUS_PORT="9464" # Port where metrics will be exposed
With these environment variables, you can customize how OpenTelemetry collects and exports telemetry data without changing your application’s source code. You can store them in a .env file or configure them directly in your deployment environment (e.g., Docker, Kubernetes).
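For example, in a container-based deployment you might pass them straight to the runtime (the image name here is just a placeholder):
docker run \
  -e OTEL_SERVICE_NAME="fastapi-app" \
  -e OTEL_EXPORTER_OTLP_ENDPOINT="http://your-otlp-endpoint:4317" \
  -e OTEL_TRACES_SAMPLER="parentbased_traceidratio" \
  -e OTEL_TRACES_SAMPLER_ARG="0.5" \
  my-fastapi-image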
3. Extensibility of OpenTelemetry: Custom Instrumentation and Exporters
While OpenTelemetry provides automatic instrumentation for many popular frameworks, there are cases where you may need custom instrumentation to track specific parts of your FastAPI application, such as specific business logic or external service calls.
Custom Instrumentation
You can manually instrument any part of your FastAPI application by using the OpenTelemetry SDK. This allows you to create custom spans, add custom attributes to spans, and track specific workflows or functions.
Example: Creating a Custom Span in FastAPI
Here’s an example of how you can create custom spans for more granular traceability in your FastAPI app:
from fastapi import FastAPI
from opentelemetry import trace
app = FastAPI()
# Create a custom tracer
tracer = trace.get_tracer(__name__)
@app.get("/custom")
async def custom_route():
with tracer.start_as_current_span("custom_span"):
# Perform some custom logic
return {"message": "This is a custom span!"}
In this example, we manually create a span named custom_span for a specific route. This lets you instrument only the critical parts of your application where you need extra visibility.
Custom Exporters
If you need to export telemetry data to a service that OpenTelemetry doesn’t natively support, you can create a custom exporter. For example, if you want to send traces to a custom backend, you can extend OpenTelemetry's exporter interface.
Example: Creating a Custom Exporter
from typing import Sequence
from opentelemetry import trace
from opentelemetry.sdk.trace import ReadableSpan
from opentelemetry.sdk.trace.export import BatchSpanProcessor, SpanExporter, SpanExportResult
class CustomExporter(SpanExporter):
    def export(self, spans: Sequence[ReadableSpan]) -> SpanExportResult:
        for span in spans:
            # Send the span data to your custom backend
            print(f"Exporting span: {span}")
        return SpanExportResult.SUCCESS
    def shutdown(self) -> None:
        # Clean up if needed
        pass
# Register your custom exporter
custom_exporter = CustomExporter()
span_processor = BatchSpanProcessor(custom_exporter)
trace.get_tracer_provider().add_span_processor(span_processor)
In this example, the CustomExporter sends span data to a custom destination. You can modify this exporter to fit any backend you wish to integrate with OpenTelemetry.
Custom Metrics
Just like traces, you can define custom metrics in OpenTelemetry to monitor specific parts of your application. For example, tracking the number of users who access a specific route:
from opentelemetry.metrics import get_meter
from fastapi import FastAPI
app = FastAPI()
meter = get_meter(__name__)
# Create a custom counter metric
request_counter = meter.create_counter(
    "custom_request_counter", description="Tracks custom requests"
)
@app.get("/track")
async def track_request():
    request_counter.add(1)  # Increment the counter for each request
    return {"message": "Request tracked!"}
In this case, each time the /track route is accessed, the counter is incremented, allowing you to monitor how many times this route is called.
Best Practices for OpenTelemetry with FastAPI
Here are some best practices for optimizing your OpenTelemetry integration with FastAPI:
1. Use Distributed Tracing for Microservices
If you have a microservices architecture, ensure you set up context propagation between services to get a full view of request flow across components.
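For example, if your FastAPI service calls another service with the requests library, the requests instrumentation (from the opentelemetry-instrumentation-requests package) injects the W3C traceparent header into outgoing calls so the downstream service can continue the same trace; a minimal sketch:
from opentelemetry.instrumentation.requests import RequestsInstrumentor
# Outgoing HTTP calls now carry the current trace context in their headers
RequestsInstrumentor().instrument()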
2. Set Sampling Rates
If you’re concerned about performance overhead or excessive data collection, configure sampling rates to control how many traces are captured.
3. Use Tags and Attributes
Enrich your spans with custom tags or attributes that help you contextualize the data. For example, include user IDs or request IDs to track specific users or sessions.
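Here is a minimal sketch of attaching attributes to the span created by the FastAPI instrumentation (the route and attribute key are just examples):
from fastapi import FastAPI
from opentelemetry import trace
app = FastAPI()
@app.get("/orders/{order_id}")
async def get_order(order_id: int):
    # Enrich the active span with request-specific context
    span = trace.get_current_span()
    span.set_attribute("app.order_id", order_id)
    return {"order_id": order_id}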
4. Monitor Key Metrics
In addition to traces, track important metrics like request latency, error rates, and throughput. This can give you an immediate sense of how your system is performing.
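Beyond the counter shown earlier, a latency histogram is a common addition; a brief sketch (the metric name is illustrative):
import time
from opentelemetry.metrics import get_meter
meter = get_meter(__name__)
# Record how long a block of work takes, in milliseconds
request_duration = meter.create_histogram(
    "app.request.duration", unit="ms", description="Request duration"
)
start = time.monotonic()
# ... the work being measured ...
request_duration.record((time.monotonic() - start) * 1000)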
Troubleshooting and Debugging with OpenTelemetry
Integrating OpenTelemetry with FastAPI can help you quickly pinpoint issues and optimize your application. Whether you're debugging slow response times or analyzing error rates, having traces and metrics at your disposal makes the process much faster and easier.
In cases where you notice issues, you can start by examining:
Traces: Look at the traces for the slow requests to identify which parts of the application are causing delays.
Metrics: Metrics like request duration and error rate can help you spot patterns and understand whether the issue is widespread or isolated.
Logs: Logs integrated with traces give you even more context, allowing you to tie specific events (like database failures or exceptions) to the corresponding traces.
Conclusion
Integrating OpenTelemetry with FastAPI gives you tracing, metrics, and logs with only a few lines of code, and configuring the SDK through environment variables is an effective way to manage telemetry collection in your application. Whether you're using OTLP, Jaeger, Prometheus, or any other backend, these environment variables allow for flexible configuration and easy management of your monitoring setup without modifying your codebase.
🤝
If you’d like to continue the conversation, join our Discord community! We have a dedicated channel where you can chat with other developers about your specific use case.