Auto-Instrumentation
Auto-instrumentation creates spans for common operations (LLM calls, vector searches, framework chains) without adding `@observe` decorators around every SDK call.
Use it to get immediate visibility into provider-level operations, then add manual spans for your business logic where it matters.
Supported providers
Basalt supports these instrumentation names:

- LLMs: `openai`, `anthropic`, `google_generativeai`, `cohere`, `bedrock`, `vertexai`, `ollama`, `mistralai`, `together`, `replicate`
- Vector DBs: `chromadb`, `pinecone`, `qdrant`
- Frameworks: `langchain`, `llamaindex`, `haystack`
Installation
Auto-instrumentation is installed as extras to keep the core SDK lightweight.
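For example, with pip the extras pattern typically looks like the sketch below; the package name and extra names shown are assumptions and should be checked against the Basalt docs for your version:

```bash
# Install the core SDK plus only the instrumentation extras you need.
# Package and extra names are illustrative assumptions.
pip install "basalt[openai,anthropic]"

# Vector DB and framework instrumentations follow the same pattern.
pip install "basalt[chromadb,langchain]"
```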
Enable instrumentation
Enable all installed providers
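A minimal sketch of enabling everything at startup. The `init`-style entry point and its lack of arguments are assumptions; only `basalt.shutdown()` comes from this page:

```python
import basalt

# Sketch, assuming an init-style entry point (name and signature are assumptions).
# With no instrument filter passed, every installed instrumentation is enabled.
basalt.init()

# ... application code ...

# Flush pending traces before the process exits.
basalt.shutdown()
```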
Enable only specific providers
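A sketch of restricting auto-instrumentation to the providers you actually use. `enabled_instruments` is the option named on this page; where it is passed (here, an init-style call) is an assumption:

```python
import basalt

# Sketch: only the listed instrumentations are enabled, even if more extras
# are installed. The init call itself is an assumption.
basalt.init(enabled_instruments=["openai", "chromadb"])
```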
Disable specific providers
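A sketch of the inverse: enable everything installed except selected providers. The `disabled_instruments` parameter name here is hypothetical; check the SDK for the actual option:

```python
import basalt

# Sketch: exclude individual instrumentations while keeping the rest enabled.
# `disabled_instruments` is a hypothetical parameter name, not confirmed API.
basalt.init(disabled_instruments=["langchain"])
```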
What gets captured
Auto-instrumented spans typically include:

- Operation name and duration
- Provider/model identifiers (for LLMs)
- Token usage and errors (when the underlying SDK exposes them)
Example (OpenAI)
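A sketch of an auto-instrumented OpenAI call. The Basalt initialization call and its parameter are assumptions; the client usage follows the standard `openai` v1 SDK:

```python
import basalt
from openai import OpenAI

# Enable the OpenAI instrumentation (init call and parameter are assumptions).
basalt.init(enabled_instruments=["openai"])

client = OpenAI()

# This call is captured automatically: a span is recorded with the model name,
# duration, and token usage reported by the OpenAI SDK. No @observe needed.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize auto-instrumentation in one sentence."}],
)
print(response.choices[0].message.content)

# Flush traces before exiting.
basalt.shutdown()
```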
Context propagation
Auto-instrumented spans inherit context set by `start_observe` (identity, experiment, metadata, evaluators). Use manual `observe` spans for your business logic and let auto-instrumentation cover provider calls.
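A sketch of that split. Using `start_observe` as a context manager, `@basalt.observe` as a decorator, and the keyword names shown are all assumptions; the point is that context set at the top level propagates to auto-instrumented spans nested inside:

```python
import basalt
from openai import OpenAI

basalt.init(enabled_instruments=["openai"])  # init call is an assumption
client = OpenAI()

@basalt.observe(name="answer_question")  # decorator form is an assumption
def answer_question(question: str) -> str:
    # The OpenAI call is auto-instrumented and becomes a child span of this
    # manual business-logic span.
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    )
    return response.choices[0].message.content

# Context-manager usage and keyword names are assumptions; identity/metadata
# set here are inherited by the auto-instrumented provider spans below.
with basalt.start_observe(name="support_flow", metadata={"tenant": "acme"}):
    answer_question("How do refunds work?")

basalt.shutdown()
```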
Best practices
- Enable only the providers you use (`enabled_instruments`) to keep overhead low.
- Don’t double-instrument: if a call is auto-instrumented, avoid wrapping the same call in a manual `observe` span.
- Call `basalt.shutdown()` when your process exits to flush traces (see the sketch after this list).
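One way to make the shutdown call reliable is to register it at interpreter exit. `basalt.shutdown()` comes from this page; the `atexit` wiring is just one option:

```python
import atexit
import basalt

# Ensure buffered traces are flushed even if the program exits early.
atexit.register(basalt.shutdown)
```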