Basic monitoring is only available for prompts retrieved directly from Basalt using the get method. It is designed for simple use cases where you’re working with a single prompt rather than a complex workflow. For advanced monitoring scenarios, please refer to the Tracing documentation.

Basic Usage

When you retrieve a prompt using the SDK, it automatically includes a generation object that can be used to monitor the complete interaction. Here’s how to use it:

// Step 1: Get the prompt with monitoring enabled
const { value, generation } = await basalt.prompt.get('my-prompt-slug')

// Step 2: Use the prompt with your LLM provider
const output = await myLLMProvider.complete(value.text)

// Step 3: Record the completion by calling end() with the output
generation.end(output)

This simple pattern automatically tracks:

  • The prompt that was used
  • When the prompt was retrieved
  • When the completion was received
  • The full completion text
  • The total time taken for the interaction
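
For example, here’s a minimal sketch of the same pattern wired to the OpenAI Node SDK. The provider, model name, and client setup are illustrative assumptions, not requirements of the SDK:

import OpenAI from 'openai'

const openai = new OpenAI() // assumes OPENAI_API_KEY is set in your environment

// Retrieve the prompt and its generation object
const { value, generation } = await basalt.prompt.get('my-prompt-slug')

// Send the compiled prompt text to the provider
const completion = await openai.chat.completions.create({
  model: 'gpt-4o',
  messages: [{ role: 'user', content: value.text }]
})

// Record the completion text on the generation
generation.end(completion.choices[0].message.content)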

User and Organization Identification

You can associate the generation with a specific user and organization by using the identify method:

// Identify the user and organization for this generation
generation.trace.identify({
  user: {
    id: 'user@example.com',
    name: 'User Name'
  },
  organization: {
    id: 'org_123',
    name: 'Organization Name'
  }
})

Evaluation

You can add automatic evaluators to assess the quality of the LLM output:

// Add an evaluator to automatically assess the response quality
generation.trace.addEvaluator({
  slug: 'good-answer'
})

Evaluators run automatically when generation.end() is called. For more information on available evaluators and how to create custom ones, see the Evaluation documentation.
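
Because evaluators fire at generation.end(), attach them before ending the generation. A minimal sketch, reusing the placeholder provider from the basic example:

// Attach the evaluator first
generation.trace.addEvaluator({
  slug: 'good-answer'
})

const output = await myLLMProvider.complete(value.text)

// The evaluator runs (subject to sampling) when end() is called
generation.end(output)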

Evaluation Configuration

By default, evaluations run on 10% of logs. You can configure the sample rate to control how often they run.

// Configure evaluations at the trace level
generation.trace.setEvaluationConfig({
  sampleRate: 0.1  // Run evaluations on 10% of traces
})
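
For instance, while developing you might temporarily evaluate every trace. This value is illustrative, not a recommendation:

// Evaluate 100% of traces during development
generation.trace.setEvaluationConfig({
  sampleRate: 1.0
})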

Complete Example

Here’s a complete example showing how to use basic monitoring with variables, user identification, and automatic evaluation:

const result = await basalt.prompt.get('comment-classifier', {
  variables: {
    comment: 'I would like to contribute to this issue'
  }
})

if (result.error) {
  console.error(result.error)
  return
}

const { generation } = result

// Identify the user and organization
generation.trace.identify({
  user: {
    id: 'guillaume@getbasalt.ai',
    name: 'Guillaume Marquis'
  },
  organization: {
    id: 'org_123',
    name: 'Basalt'
  }
})

// Add an automatic evaluator
generation.trace.addEvaluator({
  slug: 'good-answer'
})

// Call your LLM provider (commented out in this example)
// const completion = await openai.chat.completions.create({
//   model: 'gpt-4o',
//   messages: [{ role: 'user', content: 'hey' }]
// })

// Record the LLM output ('output' is a placeholder string here,
// since the provider call above is commented out)
generation.end('output')
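
Note that the early return in this example implies an enclosing function. As a rough sketch, with a hypothetical classifyComment wrapper and a live OpenAI call in place of the commented-out one (identification and evaluators omitted for brevity), the flow could be packaged like this:

async function classifyComment(comment) {
  const result = await basalt.prompt.get('comment-classifier', {
    variables: { comment }
  })

  if (result.error) {
    console.error(result.error)
    return
  }

  const { value, generation } = result

  // Call the provider with the compiled prompt text
  const completion = await openai.chat.completions.create({
    model: 'gpt-4o',
    messages: [{ role: 'user', content: value.text }]
  })

  // Record the real completion text rather than a placeholder
  generation.end(completion.choices[0].message.content)
}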

Advanced Use Cases

Basic monitoring is suitable for simple scenarios where you’re using a single prompt. More advanced use cases include:

  • Tracking multi-step workflows
  • Monitoring prompts not stored in Basalt
  • Collecting detailed metrics
  • Creating custom traces
  • Building complex evaluation pipelines

For these scenarios, please refer to the Tracing documentation, which provides comprehensive tools to support them.