Using Prompts in Python

The Basalt Python SDK gives you a simple, high-level interface to work with prompts from your Python code.
You can list prompts, fetch specific versions or tags, inspect metadata, and control which prompt version is used in each environment.
Typical use cases include:
  • Connecting your app to centrally managed prompts
  • Rolling out new prompt versions safely via tags
  • Injecting runtime data into prompts with variables
  • Keeping model configuration close to prompt content

Initialization

Initialize the client once and reuse it across your application.
from basalt import Basalt

basalt = Basalt(api_key="your-api-key")
When your application is shutting down (for example in a CLI or worker), call:
basalt.shutdown()
to clean up resources.
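For example, a worker script can guard shutdown with try/finally so resources are released even if an error occurs (run_jobs below is a placeholder for your own logic):
from basalt import Basalt

basalt = Basalt(api_key="your-api-key")

try:
    run_jobs(basalt)  # your application logic here
finally:
    # Always runs, so connections and background resources are cleaned up
    basalt.shutdown()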

Core Methods

All operations are available in both synchronous and asynchronous forms.
  • basalt.prompts.list_sync() / basalt.prompts.list_async():
    List all prompts accessible to your API key, with their basic metadata (slug, description, latest version, tags).
  • basalt.prompts.get_sync(...) / basalt.prompts.get(...):
    Retrieve a specific prompt’s rendered text and model configuration, using either a tag (e.g. production) or a specific version.
  • basalt.prompts.describe_sync(slug):
    Get detailed metadata for a single prompt, including all available versions and tags, without fetching the full prompt text. Async variants may also be available; refer to the Python SDK reference.
  • basalt.prompts.publish_sync(...) / basalt.prompts.publish(...):
    Publish a version to a tag (for example, mark version 1.3.0 as production).
Use sync methods in scripts and simple backends; prefer async methods in async web frameworks (FastAPI, Starlette, etc.) to avoid blocking the event loop.
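Here is a sketch of both styles side by side, assuming the method names above and that listed prompts expose their metadata as attributes (exact return shapes may differ; see the SDK reference):
# Synchronous: fine in scripts and simple backends
for p in basalt.prompts.list_sync():
    print(p.slug, p.description)

# Asynchronous: preferred inside async frameworks such as FastAPI,
# where get(...) is the awaitable counterpart of get_sync(...)
async def load_welcome_prompt():
    return await basalt.prompts.get(slug="welcome-message", tag="production")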

Prompt Object

When you retrieve a prompt, you receive a structured object containing:
  • slug: Unique identifier for the prompt
  • version: The concrete version number being used (e.g. "1.2.0")
  • text: The final prompt text (after variable substitution, if provided)
  • description: Human-readable description of the prompt’s purpose
  • model: Model configuration object:
    • provider: e.g. "openai", "anthropic"
    • model: e.g. "gpt-4.1", "claude-3-opus"
    • parameters: e.g. temperature, max_tokens, top_p
  • tags: List of tags currently pointing to this version (e.g. ["latest", "staging"])
You can feed prompt.text and prompt.model directly into your LLM client of choice.
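For example, with the OpenAI Python client (a hedged sketch: it assumes prompt.model.parameters unpacks as a dict of keyword arguments; adapt the mapping to your client and SDK version):
from openai import OpenAI

client = OpenAI()

prompt = basalt.prompts.get_sync(slug="welcome-message", tag="production")

response = client.chat.completions.create(
    model=prompt.model.model,  # e.g. "gpt-4.1"
    messages=[{"role": "system", "content": prompt.text}],
    **prompt.model.parameters,  # e.g. temperature, max_tokens, top_p
)
print(response.choices[0].message.content)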

Variable Substitution

Prompts can contain variables using Jinja2 syntax:
Hello {{ customer_name }}, welcome to {{ product_name }}!
When calling get_sync / get, pass a variables dictionary:
prompt = basalt.prompts.get_sync(
    slug="welcome-message",
    tag="latest",
    variables={
        "customer_name": "Alice",
        "product_name": "Premium Plan",
    },
)
print(prompt.text)
# -> "Hello Alice, welcome to Premium Plan!"
If you omit variables, the prompt is returned with raw placeholders, which is useful for debugging or previewing templates.
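For example, fetching the same prompt without variables returns the template unchanged:
template = basalt.prompts.get_sync(slug="welcome-message", tag="latest")
print(template.text)
# -> "Hello {{ customer_name }}, welcome to {{ product_name }}!"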

Caching

The SDK caches prompt resolutions for a short time to reduce latency and API calls. No configuration is required for most apps.

Error Handling

The SDK raises specific exceptions so you can react appropriately:
  • NotFoundError: The prompt, tag, or version doesn’t exist.
  • UnauthorizedError: Invalid or missing API key.
  • NetworkError: Network connectivity issues when calling the Basalt API.
  • BasaltAPIError: Other API-level errors (validation, server issues, etc.).
Wrap your calls in try/except blocks and handle these errors based on your application’s needs (e.g. fallback to a default prompt, log and return a safe message, etc.).
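A minimal pattern might look like this (assuming the exceptions are importable from the basalt package; check the SDK reference for the exact import path):
from basalt import Basalt, NotFoundError, UnauthorizedError, NetworkError, BasaltAPIError

basalt = Basalt(api_key="your-api-key")

FALLBACK_TEXT = "Hello! Welcome aboard."  # safe default if the prompt can't be fetched

try:
    text = basalt.prompts.get_sync(slug="welcome-message", tag="production").text
except NotFoundError:
    text = FALLBACK_TEXT  # prompt, tag, or version doesn't exist
except UnauthorizedError:
    raise  # misconfiguration: surface it instead of hiding it
except (NetworkError, BasaltAPIError):
    text = FALLBACK_TEXT  # network or server-side issue; degrade gracefully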