Understand the purpose and benefits of monitoring your AI applications with Basalt.
Monitoring is a crucial component of any AI application, providing visibility into how your LLMs are performing in production. Basalt’s monitoring system helps you track, analyze, and optimize your AI interactions to ensure they’re delivering value to your users.
AI applications present unique monitoring challenges compared to traditional software. Large language models can be unpredictable, produce varying outputs for similar inputs, and their behavior can evolve over time. Effective monitoring helps you:
- Identify issues before they impact users
- Track performance metrics like response time and token usage
- Detect unexpected behaviors such as hallucinations or inappropriate content
- Optimize costs by identifying inefficient processes
Basic monitoring is the simplest way to track AI interactions. It’s automatically included when you use Basalt-managed prompts and requires minimal code changes:
```javascript
// Get a prompt from Basalt (includes monitoring)
const { value, generation } = await basalt.prompt.get('prompt-slug')

// Use the prompt with your LLM provider
const response = await yourLLM.generate(value.text)

// Record the completion
generation.end(response)
```
This approach is ideal for simple workflows where you’re using a single prompt to generate a response.
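To make the sequence of calls concrete, here is a runnable sketch of the same flow. The `basalt` client and `yourLLM` objects below are hypothetical stand-ins that mimic only the call shapes shown in the snippet above; they are not the real Basalt SDK or a real LLM provider.

```javascript
// Hypothetical stand-in for the Basalt client — illustration only, not the real SDK.
const basalt = {
  prompt: {
    async get(slug) {
      // A managed prompt comes back with a `generation` handle; calling
      // `end()` records the completion for monitoring.
      const generation = {
        output: null,
        end(response) { this.output = response },
      }
      return { value: { text: `Prompt text for ${slug}` }, generation }
    },
  },
}

// Hypothetical stand-in for an LLM provider.
const yourLLM = {
  async generate(promptText) {
    return `LLM response to: ${promptText}`
  },
}

async function answerWithMonitoring() {
  // 1. Fetch the managed prompt (monitoring comes attached).
  const { value, generation } = await basalt.prompt.get('prompt-slug')
  // 2. Call your LLM provider with the prompt text.
  const response = await yourLLM.generate(value.text)
  // 3. Record the completion so it appears in monitoring.
  generation.end(response)
  return { response, recorded: generation.output }
}

answerWithMonitoring().then(({ recorded }) => {
  console.log(recorded)
})
```

The key point is the pairing: every prompt fetched from Basalt yields a `generation` handle, and calling `end()` with the LLM's response is what closes the loop and records the interaction.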
For more complex workflows involving multiple steps, parallel processes, or branching logic, Basalt offers a comprehensive tracing system:
```javascript
// Create a trace for a multi-step workflow
const trace = basalt.monitor.createTrace('feature-slug')

// Add logs and generations to track each step
const classificationLog = trace.createLog({ name: 'classify-input' })
const generationLog = trace.createGeneration({ name: 'generate-response' })

// Record results at each step
classificationLog.end(classificationResult)
generationLog.end(generatedResponse)

// End the trace when the workflow completes
trace.end(finalResult)
```
Tracing gives you deeper visibility into your AI application’s behavior and performance, allowing you to identify bottlenecks and optimize each step of your workflow.
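To illustrate how the trace pieces fit together, here is a runnable sketch of a two-step classify-then-generate workflow. The `createTrace` implementation below is a hypothetical stand-in that mimics only the call shapes shown above (a trace that hands out log and generation steps, each closed with `end()`); it is not the real Basalt SDK.

```javascript
// Hypothetical stand-in for basalt.monitor.createTrace — illustration only,
// not the real SDK. Each step records its result when `end()` is called.
function createTrace(featureSlug) {
  const steps = []
  const makeStep = (name) => {
    const step = { name, output: null, end(result) { step.output = result } }
    steps.push(step)
    return step
  }
  return {
    featureSlug,
    steps,
    output: null,
    createLog({ name }) { return makeStep(name) },
    createGeneration({ name }) { return makeStep(name) },
    end(result) { this.output = result },
  }
}

// A two-step workflow: classify the input, then generate a response.
const trace = createTrace('feature-slug')

const classificationLog = trace.createLog({ name: 'classify-input' })
classificationLog.end('intent: support-request')

const generationLog = trace.createGeneration({ name: 'generate-response' })
generationLog.end('Here is how to reset your password…')

// Close the trace once the whole workflow is done.
trace.end('workflow complete')

console.log(trace.steps.map((s) => s.name)) // → [ 'classify-input', 'generate-response' ]
```

Because every step hangs off the same trace, the recorded steps preserve the workflow's structure, which is what lets you attribute latency or cost to a specific stage rather than to the workflow as a whole.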
By integrating monitoring into your AI applications from the beginning, you’ll gain valuable insights that help you deliver more reliable, performant, and cost-effective AI experiences to your users.