Guide for implementing Grafana Tempo - a high-scale distributed tracing backend for OpenTelemetry traces. Use when configuring Tempo deployments, setting up storage backends (S3, Azure Blob, GCS), writing TraceQL queries, deploying via Helm, understanding trace structure, or troubleshooting Tempo issues on Kubernetes.
Comprehensive guide for Grafana Tempo - the cost-effective, high-scale distributed tracing backend designed for OpenTelemetry.
Tempo is a high-scale distributed tracing backend that:

- Requires only object storage to operate (no Cassandra or Elasticsearch dependency)
- Ingests traces in OpenTelemetry, Jaeger, and Zipkin formats
- Supports multi-tenancy via the `X-Scope-OrgID` header

| Component | Purpose |
|---|---|
| Distributor | Entry point for trace data, routes to ingesters via consistent hash ring |
| Ingester | Buffers traces in memory, creates Parquet blocks, flushes to storage |
| Query Frontend | Query orchestration, shards blockID space, coordinates queriers |
| Querier | Locates traces in ingesters or storage using bloom filters |
| Compactor | Compresses blocks, deduplicates data, manages retention |
| Metrics Generator | Optional: derives span metrics and service graphs from traces |
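
The components above map directly onto sections of Tempo's configuration file. A minimal monolithic config sketch, assuming an S3 backend (the bucket name, endpoint, and retention value are illustrative, not defaults):

```yaml
server:
  http_listen_port: 3200

distributor:
  receivers:              # accept OTLP over gRPC from apps or collectors
    otlp:
      protocols:
        grpc:

storage:
  trace:
    backend: s3           # alternatives: gcs, azure, local
    s3:
      bucket: tempo-traces
      endpoint: s3.amazonaws.com

compactor:
  compaction:
    block_retention: 48h  # how long compacted blocks are retained
```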
Write Path:

```
Applications → Collector → Distributor → Ingester → Object Storage
                               ↓
                     Consistent Hash Ring
                      (routes by traceID)
```
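The distributor's routing step can be illustrated with a toy consistent-hash ring. This is a sketch only: Tempo's real ring uses different hashing, token management, and replication, but the idea of mapping a trace ID onto the ingester that owns the next token clockwise is the same.

```python
import bisect
import hashlib

def _hash(key: str) -> int:
    # Stable 32-bit hash; any uniform hash illustrates the routing idea.
    return int(hashlib.md5(key.encode()).hexdigest()[:8], 16)

class Ring:
    """Toy consistent-hash ring: each ingester owns several tokens;
    a trace ID routes to the owner of the next token clockwise."""

    def __init__(self, ingesters, tokens_per_ingester=32):
        self.tokens = sorted(
            (_hash(f"{name}-{i}"), name)
            for name in ingesters
            for i in range(tokens_per_ingester)
        )
        self.keys = [t for t, _ in self.tokens]

    def route(self, trace_id: str) -> str:
        idx = bisect.bisect(self.keys, _hash(trace_id)) % len(self.tokens)
        return self.tokens[idx][1]

ring = Ring(["ingester-0", "ingester-1", "ingester-2"])
owner = ring.route("0af7651916cd43dd8448eb211c80319c")
```

Because only tokens near a joining or leaving ingester move, most trace IDs keep routing to the same ingester during scaling events.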
Read Path:

```
Query Request → Query Frontend → Queriers → Ingesters (recent data)
                      ↓              ↓
               Block Sharding   Object Storage (historical data)
                      ↓              ↓
      Parallel Querier Work     Bloom Filters + Indexes
```
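
On the read side, queriers evaluate TraceQL. A few representative queries (the service and attribute names are examples): the first selects all traces touching one service, the second finds slow failing spans, and the third keeps only traces with more than three matching spans.

```
{ resource.service.name = "checkout" }
{ span.http.status_code >= 500 && duration > 200ms }
{ resource.service.name = "checkout" } | count() > 3
```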
Deployment modes:

- Monolithic: all components in one process (`-target=all`)
- Scalable monolithic: multiple replicas of the single binary (`-target=scalable-single-binary`)
- Microservices: each component deployed and scaled separately, typically using the tempo-distributed Helm chart
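
For the microservices mode, a typical install sketch with the tempo-distributed chart looks like the following (the release name, namespace, and `values.yaml` file are assumptions for illustration):

```shell
# Add the Grafana Helm repository and refresh the chart index
helm repo add grafana https://grafana.github.io/helm-charts
helm repo update

# Install the microservices deployment into a dedicated namespace,
# overriding defaults (storage backend, replicas) via a values file
helm install tempo grafana/tempo-distributed \
  --namespace tempo --create-namespace \
  -f values.yaml
```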