Overview
Basalt’s observability system gives you end-to-end visibility into your AI workloads—from HTTP handlers and background jobs down to prompts, LLM calls, tools, and evaluators. It is built on OpenTelemetry and centered on two primitives:
- start_observe – creates root spans that represent entire requests or workflows
- observe – creates child spans for nested operations (LLM calls, RAG, tools, etc.)
Major v1 changes
- Unified observe/start_observe API for tracing, logging, and context
- Full OpenTelemetry support with automatic context propagation (sync and async)
- Auto-instrumentation for LLMs, vector DBs, and popular frameworks
- First-class identity, experiments, and evaluators attached to traces
- Consistent APIs for sync and async functions (same decorators / context managers)
Root spans with start_observe
Every trace starts with a root span created by start_observe. Use this at the entry points of your system (HTTP handlers, workers, CLI commands).
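As an illustration, a root span at a service entry point might look like the following sketch (the import path and decorator parameters are assumptions based on this page, not confirmed API):

```python
# Hypothetical sketch: `start_observe` comes from this page, but the
# import path and the `name` parameter are assumptions.
from basalt.observability import start_observe

@start_observe(name="handle_support_request")
def handle_support_request(payload: dict) -> dict:
    # Everything called inside runs under this root span, including
    # nested `observe` spans and auto-instrumented LLM calls.
    answer = answer_question(payload["question"])
    return {"answer": answer}
```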
Nested spans with observe
Use observe to create child spans that describe meaningful units of work:
- LLM generations
- Retrieval / RAG
- Tool and function execution
- Generic business logic
Span kinds (ObserveKind.GENERATION, RETRIEVAL, TOOL, etc.) make traces easier to explore and filter in the Basalt UI.
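For example, two nested units of work tagged with kinds could be sketched like this (the import path, the `kind`/`name` parameter names, and the helper functions `vector_store.search` and `llm.complete` are illustrative assumptions):

```python
# Sketch: `observe` and `ObserveKind` appear on this page; everything
# else here (import path, parameter names) is an assumption.
from basalt.observability import observe, ObserveKind

@observe(kind=ObserveKind.RETRIEVAL, name="search_docs")
def search_docs(query: str) -> list[str]:
    # Runs as a child of whichever span is currently active.
    return vector_store.search(query, top_k=5)  # hypothetical retriever

@observe(kind=ObserveKind.GENERATION, name="draft_answer")
def draft_answer(query: str, context: list[str]) -> str:
    return llm.complete(prompt=f"{context}\n\nQ: {query}")  # hypothetical client
```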
Enriching spans
You can attach additional information to the current active span using static helpers:
- observe.set_identity(...) – set or update user/org identity
- observe.metadata(...) / observe.update_metadata(...) – add metadata
- observe.set_input(...) / observe.set_output(...) – capture inputs/outputs
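A sketch combining these helpers inside an observed function (the helper names come from this page; their argument shapes, and the `llm.summarize` call, are assumptions):

```python
from basalt.observability import observe  # import path is an assumption

@observe(name="summarize")
def summarize(user_id: str, text: str) -> str:
    observe.set_identity(user_id=user_id)        # who triggered this span
    observe.metadata({"text_chars": len(text)})  # arbitrary key/value metadata
    observe.set_input(text)                      # capture the raw input
    summary = llm.summarize(text)                # hypothetical LLM call
    observe.set_output(summary)                  # capture the result
    return summary
```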
Async monitoring
The same decorators work for async functions. For advanced use, explicit async variants async_start_observe and async_observe are also available.
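Applied to async functions, the decorators read the same way; this sketch assumes the import path and parameter names shown earlier:

```python
# Sketch: the page states the same decorators work on async functions,
# with context propagated across awaits; signatures are assumptions.
import asyncio
from basalt.observability import observe, start_observe

@start_observe(name="async_pipeline")
async def pipeline(question: str) -> str:
    return await generate(question)

@observe(name="generate")
async def generate(question: str) -> str:
    await asyncio.sleep(0)  # stand-in for an awaited LLM call
    return f"answer to {question!r}"
```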
Client Initialization
Basic initialization
The simplest way to initialize Basalt is a single call with your API key.

With observability metadata
Attach global metadata that will be added to all traces.

With telemetry configuration
For advanced configuration of OpenTelemetry behavior and auto-instrumentation, pass a TelemetryConfig.

With selective auto-instrumentation
Enable or disable individual auto-instrumentation providers.

Shutdown
Always call shutdown() before your application exits to flush pending telemetry.
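Tying the sections above together, a hedged end-to-end sketch (TelemetryConfig and shutdown() appear on this page, but the client class name, import path, and constructor parameter names are assumptions and may differ from the real API):

```python
# Hypothetical initialization sketch — verify names against the SDK.
from basalt import Basalt, TelemetryConfig

client = Basalt(
    api_key="sk-...",                    # or set BASALT_API_KEY instead
    metadata={"service": "support-bot"}, # attached to every trace
    telemetry=TelemetryConfig(
        enabled_providers=["openai", "chromadb"],  # opt in selectively
    ),
)

try:
    run_application()  # hypothetical application entry point
finally:
    client.shutdown()  # flush pending telemetry before exit
```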
TelemetryConfig Reference
enabled_providers / disabled_providers accept the following provider names:
- LLMs: openai, anthropic, google_generativeai, bedrock, vertexai, ollama, mistralai, together, replicate
- Vector DBs: chromadb, pinecone, qdrant
- Frameworks: langchain, llamaindex, haystack
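For instance, the two fields can be used as an allow-list or a deny-list; the field names come from this reference, while the import path and the exact interaction between the two fields are assumptions:

```python
from basalt import TelemetryConfig  # import path is an assumption

# Allow-list: only instrument these providers.
allow = TelemetryConfig(enabled_providers=["openai", "pinecone"])

# Deny-list: instrument everything except these.
deny = TelemetryConfig(disabled_providers=["langchain", "llamaindex"])
```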
Environment Variables
You can also configure Basalt using environment variables:

| Variable | Purpose | Example |
|---|---|---|
| BASALT_API_KEY | API authentication key | sk-... |
| BASALT_TELEMETRY_ENABLED | Enable/disable telemetry | true or false |
| BASALT_SERVICE_NAME | Service name for traces | my-app |
| BASALT_ENVIRONMENT | Deployment environment | production |
| BASALT_LOG_LEVEL | Log level for Basalt loggers | DEBUG, INFO, WARNING |
| BASALT_ENABLED_INSTRUMENTS | Comma-separated list of providers to enable | openai,anthropic |
| BASALT_DISABLED_INSTRUMENTS | Comma-separated list of providers to disable | langchain |
| BASALT_OTEL_EXPORTER_OTLP_ENDPOINT | Custom OTLP endpoint | http://localhost:4317 |
| BASALT_SAMPLE_RATE | Global evaluation sample rate | 0.1 |
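An example environment configuration mirroring the table above (the variable names come from the table; the specific values are placeholders):

```shell
# Configure Basalt entirely through the environment.
export BASALT_API_KEY="sk-..."
export BASALT_SERVICE_NAME="my-app"
export BASALT_ENVIRONMENT="production"
export BASALT_LOG_LEVEL="INFO"
export BASALT_ENABLED_INSTRUMENTS="openai,anthropic"
export BASALT_OTEL_EXPORTER_OTLP_ENDPOINT="http://localhost:4317"
export BASALT_SAMPLE_RATE="0.1"
```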