
Using Prompts in Python

The Basalt Python SDK gives you a simple, high-level interface to work with prompts from your Python code.
You can list prompts, fetch specific versions or tags, inspect metadata, and control which prompt version is used in each environment.
Typical use cases include:
  • Connecting your app to centrally managed prompts
  • Rolling out new prompt versions safely via tags
  • Injecting runtime data into prompts with variables
  • Keeping model configuration close to prompt content

Initialization

Initialize the client once and reuse it across your application.
from basalt import Basalt

basalt = Basalt(api_key="your-api-key")
When your application is shutting down (for example in a CLI or worker), clean up resources by calling:
basalt.shutdown()
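One convenient lifecycle pattern is to register shutdown with atexit so cleanup happens automatically when the process exits. A minimal sketch (make_client is a hypothetical helper, not part of the SDK; the lazy import is only so the snippet stays importable on its own, in real code import Basalt at module top):

```python
import atexit

def make_client(api_key: str):
    # Create the client once and register shutdown() so resources
    # are released when the interpreter exits.
    from basalt import Basalt  # imported lazily only for this sketch
    client = Basalt(api_key=api_key)
    atexit.register(client.shutdown)
    return client
```

Reuse the returned client everywhere rather than constructing one per request.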

Core Methods

All operations are available in both synchronous and asynchronous forms.
  • basalt.prompts.list_sync(feature_slug=None) / await basalt.prompts.list(feature_slug=None):
    List all prompts accessible to your API key, with their metadata. Optionally filter by feature_slug.
  • basalt.prompts.get_sync(...) / await basalt.prompts.get(...):
    Retrieve a specific prompt’s rendered text and model configuration, using either a tag (e.g. production) or a specific version. Returns a context manager wrapper around the Prompt object.
  • basalt.prompts.describe_sync(slug, version=None, tag=None) / await basalt.prompts.describe(slug, version=None, tag=None):
    Get detailed metadata for a single prompt, including all available versions and tags, without fetching the full prompt text.
  • basalt.prompts.publish_sync(...) / await basalt.prompts.publish(...):
    Publish a version to a tag (for example, mark version 1.3.0 as production).
Use sync methods in scripts and simple backends; prefer async methods in async web frameworks (FastAPI, Starlette, etc.) to avoid blocking the event loop.
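As a sketch, the async variants slot naturally into coroutine-based request handlers. The function below is illustrative (basalt is an initialized client as shown above; the slug and tag are examples):

```python
import asyncio

async def fetch_welcome_prompt(basalt):
    # Await the async form inside an event loop; get() resolves the
    # "production" tag to a concrete version server-side.
    wrapper = await basalt.prompts.get("welcome-message", tag="production")
    return wrapper
```

In a sync script, the equivalent call would be basalt.prompts.get_sync("welcome-message", tag="production").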

Prompt Object

When you retrieve a prompt, you receive a context manager wrapper around a Prompt object. The wrapper can be used directly or as a context manager:
# Imperative: access prompt directly
prompt = basalt.prompts.get_sync("my-prompt")
print(prompt.text)

# Context manager: scopes tracing to the prompt
with basalt.prompts.get_sync("my-prompt") as prompt:
    response = llm.generate(prompt.text)
The Prompt object contains:
  • slug: Unique identifier for the prompt
  • version: The concrete version number being used (e.g. "1.2.0")
  • tag: The tag that was resolved (if used with tag= parameter)
  • text: The final prompt text (after variable substitution, if provided)
  • raw_text: The original template text before variable substitution
  • system_text: System/instruction text (after variable substitution), if defined
  • raw_system_text: Original system template text before substitution
  • model: Model configuration object:
    • provider: e.g. "openai", "anthropic"
    • model: e.g. "gpt-4", "claude-3-opus"
    • version: Model version
    • parameters: Model parameters object with:
      • temperature (float): Sampling temperature
      • max_length (int): Maximum completion tokens
      • top_p (float): Nucleus sampling parameter
      • top_k (float, optional): Top-k sampling
      • frequency_penalty (float, optional): Frequency penalty
      • presence_penalty (float, optional): Presence penalty
      • response_format (str): Response format (e.g., "text", "json")
      • json_object (dict, optional): JSON schema for structured output
      • tools (optional): Function/tool definitions for function-calling
  • tools: Shorthand for model.parameters.tools
  • variables: The variables dict that was used in compilation (if any)
You can feed prompt.text and prompt.model directly into your LLM client of choice.
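For example, the model configuration can be translated into keyword arguments for an LLM client. This sketch uses a stand-in Prompt built from SimpleNamespace; the attribute names match the list above, while the output kwarg names (max_tokens, etc.) depend on your client library:

```python
from types import SimpleNamespace

def to_llm_kwargs(prompt):
    # Map Basalt's model parameters onto common LLM-client kwargs.
    params = prompt.model.parameters
    return {
        "model": prompt.model.model,
        "temperature": params.temperature,
        "max_tokens": params.max_length,
        "top_p": params.top_p,
    }

# Stand-in for a retrieved Prompt object, for illustration only.
prompt = SimpleNamespace(
    text="Hello Alice, welcome to Premium Plan!",
    model=SimpleNamespace(
        provider="openai",
        model="gpt-4",
        parameters=SimpleNamespace(temperature=0.7, max_length=512, top_p=1.0),
    ),
)
kwargs = to_llm_kwargs(prompt)
# kwargs["model"] -> "gpt-4"
```

With a real Prompt, you would pass prompt.text as the user message and spread these kwargs into your client's completion call.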

Variable Substitution

Prompts can contain variables using Jinja2 syntax:
Hello {{ customer_name }}, welcome to {{ product_name }}!
When calling get_sync / get, pass a variables dictionary:
prompt = basalt.prompts.get_sync(
    slug="welcome-message",
    tag="latest",
    variables={
        "customer_name": "Alice",
        "product_name": "Premium Plan",
    },
)
print(prompt.text)
# -> "Hello Alice, welcome to Premium Plan!"
If you omit variables, the prompt is returned with raw placeholders, which is useful for debugging or previewing templates.
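Conceptually, substitution is a Jinja2-style render over the raw template. The dependency-free stand-in below (a regex approximation of {{ var }} replacement, not the SDK's actual renderer) illustrates both behaviors, including the raw-placeholder case:

```python
import re

def render(template: str, variables: dict) -> str:
    # Replace {{ name }} placeholders; unknown names are left intact,
    # mirroring the "omit variables -> raw placeholders" behavior above.
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )

raw = "Hello {{ customer_name }}, welcome to {{ product_name }}!"
print(render(raw, {"customer_name": "Alice", "product_name": "Premium Plan"}))
# -> Hello Alice, welcome to Premium Plan!
print(render(raw, {}))
# -> Hello {{ customer_name }}, welcome to {{ product_name }}!
```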

Caching

The SDK caches prompt resolutions for a short time to reduce latency and API calls. No configuration is required for most apps.

Error Handling

The SDK raises specific exceptions so you can react appropriately:
  • NotFoundError: The prompt, tag, or version doesn’t exist.
  • UnauthorizedError: Invalid or missing API key.
  • NetworkError: Network connectivity issues when calling the Basalt API.
  • BasaltAPIError: Other API-level errors (validation, server issues, etc.).
Wrap your calls in try/except blocks and handle these errors based on your application’s needs (e.g. fallback to a default prompt, log and return a safe message, etc.).
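One common pattern is a fallback wrapper that returns a safe default when resolution fails. The NotFoundError class below is a stand-in for the SDK exception of the same name (import the real one from the basalt package; check your SDK version for the exact import path), and fetch is any callable that retrieves the prompt:

```python
class NotFoundError(Exception):
    """Stand-in for the SDK's NotFoundError, for illustration only."""

def get_text_or_default(fetch, slug: str, default_text: str) -> str:
    # Fall back to a safe default when the prompt, tag, or version is missing.
    try:
        return fetch(slug).text
    except NotFoundError:
        return default_text

# Illustration with a fake fetcher that always raises:
def missing(slug):
    raise NotFoundError(slug)

print(get_text_or_default(missing, "welcome-message", "Hello!"))
# -> Hello!
```

In production you would likely also catch NetworkError with a retry, and let UnauthorizedError surface as a configuration bug.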