Using Prompts in Python
The Basalt Python SDK gives you a simple, high-level interface to work with prompts from your Python code. You can list prompts, fetch specific versions or tags, inspect metadata, and control which prompt version is used in each environment. Typical use cases include:
- Connecting your app to centrally managed prompts
- Rolling out new prompt versions safely via tags
- Injecting runtime data into prompts with variables
- Keeping model configuration close to prompt content
Initialization
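As a minimal sketch, assuming the client class is importable as `from basalt import Basalt` and takes an `api_key` keyword (both are assumptions based on common SDK conventions; check your installed version):

```python
import os

def make_basalt_client():
    """Create the Basalt client from an environment variable.

    NOTE: the `from basalt import Basalt` import path and the `api_key`
    keyword are assumptions; consult your SDK version for exact names.
    """
    from basalt import Basalt  # assumed import path
    return Basalt(api_key=os.environ["BASALT_API_KEY"])

# Create once at startup and reuse, e.g. as a module-level singleton:
# basalt = make_basalt_client()
```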
Initialize the client once and reuse it across your application.

Core Methods
All operations are available in both synchronous and asynchronous forms.

- `basalt.prompts.list_sync(feature_slug=None)` / `await basalt.prompts.list(feature_slug=None)`:
  List all prompts accessible to your API key, with their metadata. Optionally filter by `feature_slug`.
- `basalt.prompts.get_sync(...)` / `await basalt.prompts.get(...)`:
  Retrieve a specific prompt's rendered text and model configuration, using either a `tag` (e.g. `production`) or a specific `version`. Returns a context manager wrapper around the Prompt object.
- `basalt.prompts.describe_sync(slug, version=None, tag=None)` / `await basalt.prompts.describe(slug, version=None, tag=None)`:
  Get detailed metadata for a single prompt, including all available versions and tags, without fetching the full prompt text.
- `basalt.prompts.publish_sync(...)` / `await basalt.prompts.publish(...)`:
  Publish a version to a tag (for example, mark version `1.3.0` as `production`).
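The first two methods can be sketched as follows. The prompt slug (`"welcome-email"`) and feature slug (`"onboarding"`) are illustrative, and the returned list items are assumed to expose the documented `slug` attribute; the `basalt` argument is an initialized client:

```python
def summarize_available_prompts(basalt):
    """List prompts, optionally filtered by feature, and return their slugs."""
    prompts = basalt.prompts.list_sync(feature_slug="onboarding")  # illustrative filter
    return [p.slug for p in prompts]

def fetch_production_prompt(basalt):
    """Fetch the text of whichever version is currently tagged `production`."""
    # The slug "welcome-email" is a hypothetical example.
    with basalt.prompts.get_sync("welcome-email", tag="production") as prompt:
        return prompt.text
```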
Prompt Object
When you retrieve a prompt, you receive a context manager wrapper around a Prompt object. The wrapper can be used directly or as a context manager:

- `slug`: Unique identifier for the prompt
- `version`: The concrete version number being used (e.g. `"1.2.0"`)
- `tag`: The tag that was resolved (if used with the `tag=` parameter)
- `text`: The final prompt text (after variable substitution, if provided)
- `raw_text`: The original template text before variable substitution
- `system_text`: System/instruction text (after variable substitution), if defined
- `raw_system_text`: Original system template text before substitution
- `model`: Model configuration object:
  - `provider`: e.g. `"openai"`, `"anthropic"`
  - `model`: e.g. `"gpt-4"`, `"claude-3-opus"`
  - `version`: Model version
  - `parameters`: Model parameters object with:
    - `temperature` (float): Sampling temperature
    - `max_length` (int): Maximum completion tokens
    - `top_p` (float): Nucleus sampling parameter
    - `top_k` (float, optional): Top-k sampling
    - `frequency_penalty` (float, optional): Frequency penalty
    - `presence_penalty` (float, optional): Presence penalty
    - `response_format` (str): Response format (e.g., `"text"`, `"json"`)
    - `json_object` (dict, optional): JSON schema for structured output
    - `tools` (optional): Function/tool definitions for function calling
- `tools`: Shorthand for `model.parameters.tools`
- `variables`: The variables dict that was used in compilation (if any)
You can pass `prompt.text` and `prompt.model` directly into your LLM client of choice.
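For example, a small helper that relies only on the documented `system_text` and `text` attributes to build chat messages. The `{"role": ..., "content": ...}` message shape is an assumption about your downstream client (it matches the common OpenAI-style format):

```python
def to_chat_messages(prompt):
    """Convert a Basalt Prompt object into OpenAI-style chat messages.

    Uses only the documented `system_text` and `text` attributes.
    """
    messages = []
    # Include a system message only when the prompt defines one.
    if getattr(prompt, "system_text", None):
        messages.append({"role": "system", "content": prompt.system_text})
    messages.append({"role": "user", "content": prompt.text})
    return messages
```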
Variable Substitution
Prompts can contain variables using Jinja2 syntax. When calling `get_sync` / `get`, pass a `variables` dictionary:
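A sketch of passing variables at fetch time; the slug (`"greeting"`) and the variable names are illustrative, and the stored template is assumed to use matching Jinja2 placeholders:

```python
def render_greeting(basalt):
    """Fetch a prompt and substitute variables into its template.

    Assumes the stored template contains Jinja2 placeholders such as
    {{ name }} and {{ plan }} (illustrative names).
    """
    with basalt.prompts.get_sync(
        "greeting",                        # hypothetical slug
        tag="production",
        variables={"name": "Ada", "plan": "pro"},
    ) as prompt:
        # prompt.text now has the placeholders filled in;
        # prompt.raw_text still holds the original template.
        return prompt.text
```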
If you don't pass `variables`, the prompt is returned with its raw placeholders, which is useful for debugging or previewing templates.
Caching
The SDK caches prompt resolutions for a short time to reduce latency and API calls. No configuration is required for most apps.

Error Handling
The SDK raises specific exceptions so you can react appropriately:

- `NotFoundError`: The prompt, tag, or version doesn't exist.
- `UnauthorizedError`: Invalid or missing API key.
- `NetworkError`: Network connectivity issues when calling the Basalt API.
- `BasaltAPIError`: Other API-level errors (validation, server issues, etc.).
Wrap calls in try/except blocks and handle these errors based on your application's needs (e.g. fall back to a default prompt, or log and return a safe message).
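A sketch of a fallback pattern using the documented exception names; note that the `from basalt import ...` path below is an assumption, so check where your SDK version actually exposes these classes:

```python
def get_prompt_text_or_fallback(basalt, slug, fallback_text):
    """Fetch a prompt's text, returning a safe default on failure."""
    # The exception names are documented; the import path is an
    # assumption about where the SDK exposes them.
    from basalt import NotFoundError, NetworkError, BasaltAPIError
    try:
        with basalt.prompts.get_sync(slug, tag="production") as prompt:
            return prompt.text
    except NotFoundError:
        # The prompt, tag, or version doesn't exist: use the default.
        return fallback_text
    except (NetworkError, BasaltAPIError):
        # Transient network or server-side issue: log and fall back.
        return fallback_text
```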