Commit 1da9f22: update classifier docs
cornelcroi committed Dec 25, 2024 (1 parent 6cf6c04)
Showing 2 changed files with 356 additions and 142 deletions.

docs/src/content/docs/classifiers/built-in/anthropic-classifier.mdx (193 additions, 74 deletions)

The Anthropic Classifier extends the abstract `Classifier` class and uses the Anthropic API to classify user requests. It:
- Supports custom system prompts and variables
- Handles conversation history for context-aware classification

### Default Model

The classifier uses Claude 3.5 Sonnet as its default model:
```typescript
ANTHROPIC_MODEL_ID_CLAUDE_3_5_SONNET = "claude-3-5-sonnet-20240620"
```

### Python Package

If you haven't already installed the Anthropic-related dependencies, make sure to install them:
pip install "multi-agent-orchestrator[anthropic]"
```

### Basic Usage

To use the AnthropicClassifier, you need to create an instance with your Anthropic API key and pass it to the Multi-Agent Orchestrator:

import { Tabs, TabItem } from '@astrojs/starlight/components';

## System Prompt and Variables

### Full Default System Prompt

The default system prompt used by the classifier is comprehensive and includes examples of both simple and complex interactions:

```
You are AgentMatcher, an intelligent assistant designed to analyze user queries and match them with
the most suitable agent or department. Your task is to understand the user's request,
identify key entities and intents, and determine which agent or department would be best equipped
to handle the query.
Important: The user's input may be a follow-up response to a previous interaction.
The conversation history, including the name of the previously selected agent, is provided.
If the user's input appears to be a continuation of the previous conversation
(e.g., "yes", "ok", "I want to know more", "1"), select the same agent as before.
Analyze the user's input and categorize it into one of the following agent types:
<agents>
{{AGENT_DESCRIPTIONS}}
</agents>
If you are unable to select an agent put "unknown"
Guidelines for classification:
Agent Type: Choose the most appropriate agent type based on the nature of the query.
For follow-up responses, use the same agent type as the previous interaction.
Priority: Assign based on urgency and impact.
High: Issues affecting service, billing problems, or urgent technical issues
Medium: Non-urgent product inquiries, sales questions
Low: General information requests, feedback
Key Entities: Extract important nouns, product names, or specific issues mentioned.
For follow-up responses, include relevant entities from the previous interaction if applicable.
For follow-ups, relate the intent to the ongoing conversation.
Confidence: Indicate how confident you are in the classification.
High: Clear, straightforward requests or clear follow-ups
Medium: Requests with some ambiguity but likely classification
Low: Vague or multi-faceted requests that could fit multiple categories
Is Followup: Indicate whether the input is a follow-up to a previous interaction.
Handle variations in user input, including different phrasings, synonyms,
and potential spelling errors.
For short responses like "yes", "ok", "I want to know more", or numerical answers,
treat them as follow-ups and maintain the previous agent selection.
Here is the conversation history that you need to take into account before answering:
<history>
{{HISTORY}}
</history>
Skip any preamble and provide only the response in the specified format.
```

### Variable Replacements

#### AGENT_DESCRIPTIONS Example
```
tech-support-agent:Specializes in resolving technical issues, software problems, and system configurations
billing-agent:Handles all billing-related queries, payment processing, and subscription management
customer-service-agent:Manages general inquiries, account questions, and product information requests
sales-agent:Assists with product recommendations, pricing inquiries, and purchase decisions
```
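As an illustration of how this placeholder might be filled, here is a small plain-Python sketch. The registry and helper are hypothetical, not the library's implementation; they only show the `name:description` line format being substituted into a template:

```python
# Hypothetical sketch: flattening an agent registry into the
# {{AGENT_DESCRIPTIONS}} placeholder. Not the library's actual code.
agents = {
    "tech-support-agent": "Specializes in resolving technical issues, software problems, and system configurations",
    "billing-agent": "Handles all billing-related queries, payment processing, and subscription management",
}

def render_agent_descriptions(agent_map):
    # One "name:description" line per agent, matching the format shown above.
    return "\n".join(f"{name}:{desc}" for name, desc in agent_map.items())

template = "<agents>\n{{AGENT_DESCRIPTIONS}}\n</agents>"
prompt = template.replace("{{AGENT_DESCRIPTIONS}}", render_agent_descriptions(agents))
print(prompt)
```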

### Extended HISTORY Examples

Each assistant response in the conversation history is prefixed with `[agent-name]`, so the classifier can see exactly which agent handled each interaction:

```
user: I need help with my subscription
assistant: [billing-agent] I can help you with your subscription. What specific information do you need?
user: The premium features aren't working
assistant: [tech-support-agent] I'll help you troubleshoot the premium features. Could you tell me which specific features aren't working?
user: The cloud storage says I only have 5GB but I'm supposed to have 100GB
assistant: [tech-support-agent] Let's verify your subscription status and refresh your storage allocation. When did you last see the correct storage amount?
user: How much am I paying for this subscription?
assistant: [billing-agent] I'll check your subscription details. Your current plan is $29.99/month for the Premium tier with 100GB storage. Would you like me to review your billing history?
user: Yes please
```

Here, the history shows the conversation moving between `billing-agent` and `tech-support-agent` as the topic shifts between billing and technical issues.


The agent prefixing (e.g., `[agent-name]`) is automatically handled by the Multi-Agent Orchestrator when formatting the conversation history. This helps the classifier understand:
- Which agent handled each part of the conversation
- The context of previous interactions
- When agent transitions occurred
- How to maintain continuity for follow-up responses
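To make the prefixing convention concrete, here is a minimal sketch in plain Python. The helper is hypothetical (the orchestrator does this formatting internally); it only illustrates the `[agent-name]` convention described above:

```python
# Illustrative sketch of the [agent-name] history-prefixing convention.
# format_history is a hypothetical helper, not the orchestrator's actual code.
def format_history(turns):
    lines = []
    for turn in turns:
        if turn["role"] == "assistant":
            # Assistant turns carry the name of the agent that produced them.
            lines.append(f"assistant: [{turn['agent']}] {turn['content']}")
        else:
            lines.append(f"user: {turn['content']}")
    return "\n".join(lines)

history = format_history([
    {"role": "user", "content": "I need help with my subscription"},
    {"role": "assistant", "agent": "billing-agent",
     "content": "I can help you with your subscription."},
])
print(history)
```

The classifier receives this flat string in place of `{{HISTORY}}`, which is what lets it detect agent transitions and route short follow-ups like "yes please" back to the previous agent.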

## Tool-Based Response Structure

The AnthropicClassifier uses a tool specification to enforce structured output from the model. This is a design pattern that ensures consistent and properly formatted responses.

### The Tool Specification
```json
{
"name": "analyzePrompt",
"description": "Analyze the user input and provide structured output",
"input_schema": {
"type": "object",
"properties": {
"userinput": {"type": "string"},
"selected_agent": {"type": "string"},
"confidence": {"type": "number"}
},
"required": ["userinput", "selected_agent", "confidence"]
}
}
```

### Why Use Tools?

1. **Structured Output**: Instead of free-form text, the model must provide exactly the data structure we need.
2. **Guaranteed Format**: The tool schema ensures we always get:
- A valid agent identifier
- A properly formatted confidence score
- All required fields
3. **Implementation Note**: The tool isn't actually executed; it's a pattern that forces the model to structure its response in a specific way that maps directly to our `ClassifierResult` type.

Example Response:
```json
{
"userinput": "I need to reset my password",
"selected_agent": "tech-support-agent",
"confidence": 0.95
}
```
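Because the tool is never executed, handling the model's response reduces to reading the tool input back out and repackaging it. The following is a hedged sketch under that assumption; the dataclass and parsing helper are illustrative, not the library's internals:

```python
# Illustrative sketch: turning the model's analyzePrompt tool input into a
# ClassifierResult-like object. Names mirror the docs, not the library code.
from dataclasses import dataclass

@dataclass
class ClassifierResult:
    selected_agent: str
    confidence: float

def parse_tool_response(tool_input):
    # Enforce the schema's required fields before repackaging.
    for field in ("userinput", "selected_agent", "confidence"):
        if field not in tool_input:
            raise ValueError(f"missing required field: {field}")
    return ClassifierResult(
        selected_agent=tool_input["selected_agent"],
        confidence=float(tool_input["confidence"]),
    )

result = parse_tool_response({
    "userinput": "I need to reset my password",
    "selected_agent": "tech-support-agent",
    "confidence": 0.95,
})
```

This is why the tool schema matters: the required-fields check never fires in practice, because the API guarantees the tool input conforms to `input_schema`.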

### Customizing the System Prompt

You can override the default system prompt while maintaining the required agent descriptions and history variables. Here's how to do it:

<Tabs syncKey="runtime">
<TabItem label="TypeScript" icon="seti:typescript" color="blue">
```typescript
orchestrator.classifier.setSystemPrompt(
  `You are a specialized routing expert with deep knowledge of {{INDUSTRY}} operations.
  Your available agents are:
  <agents>
  {{AGENT_DESCRIPTIONS}}
  </agents>
  Consider these key factors for {{INDUSTRY}} when routing:
  {{INDUSTRY_RULES}}
  Recent conversation context:
  <history>
  {{HISTORY}}
  </history>
  Route based on industry best practices and conversation history.`,
  {
    INDUSTRY: "healthcare",
    INDUSTRY_RULES: [
      "- HIPAA compliance requirements",
      "- Patient data privacy protocols",
      "- Emergency request prioritization",
      "- Insurance verification processes"
    ]
  }
);
```
</TabItem>
<TabItem label="Python" icon="seti:python">
```python
orchestrator.classifier.set_system_prompt(
    """You are a specialized routing expert with deep knowledge of {{INDUSTRY}} operations.
    Your available agents are:
    <agents>
    {{AGENT_DESCRIPTIONS}}
    </agents>
    Consider these key factors for {{INDUSTRY}} when routing:
    {{INDUSTRY_RULES}}
    Recent conversation context:
    <history>
    {{HISTORY}}
    </history>
    Route based on industry best practices and conversation history.""",
    {
        "INDUSTRY": "healthcare",
        "INDUSTRY_RULES": [
            "- HIPAA compliance requirements",
            "- Patient data privacy protocols",
            "- Emergency request prioritization",
            "- Insurance verification processes"
        ]
    }
)
```
</TabItem>
</Tabs>

Note: When customizing the prompt, you must include:
- The `{{AGENT_DESCRIPTIONS}}` variable to list available agents
- The `{{HISTORY}}` variable for conversation context
- Clear instructions for agent selection
- Response format expectations
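A quick illustrative check (not part of the library) that a custom template still carries the required variables before it is installed:

```python
# Hypothetical validation helper: confirm a custom system prompt template
# still contains the placeholders the classifier depends on.
REQUIRED_PLACEHOLDERS = ("{{AGENT_DESCRIPTIONS}}", "{{HISTORY}}")

def missing_placeholders(template):
    # Return the required placeholders absent from the template.
    return [p for p in REQUIRED_PLACEHOLDERS if p not in template]

# This template lists agents but forgot the history block.
missing = missing_placeholders(
    "You route requests.\n<agents>\n{{AGENT_DESCRIPTIONS}}\n</agents>"
)
```

Running a check like this before calling `set_system_prompt` catches templates that would silently lose conversation context.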

## Configuration Options

The AnthropicClassifier accepts the following configuration options:

- `api_key` (required): Your Anthropic API key.
- `model_id` (optional): The ID of the Anthropic model to use. Defaults to Claude 3.5 Sonnet.
- `inference_config` (optional): A dictionary containing inference configuration parameters:
- `max_tokens` (optional): The maximum number of tokens to generate. Defaults to 1000.
- `temperature` (optional): Controls randomness in output generation.
- `top_p` (optional): Controls diversity of output generation.
- `stop_sequences` (optional): A list of sequences that will stop generation.
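The defaulting behavior described above can be pictured as a simple dictionary merge. This is an illustrative sketch, not the library's code; only `max_tokens` has a documented default:

```python
# Sketch of how optional inference_config values might fall back to defaults.
DEFAULTS = {"max_tokens": 1000}

def resolve_inference_config(user_config=None):
    # Start from the documented defaults, then overlay user-provided values.
    config = dict(DEFAULTS)
    config.update(user_config or {})
    return config

resolved = resolve_inference_config({"temperature": 0.7})
```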

## Best Practices

1. **API Key Security**: Keep your Anthropic API key secure and never expose it in your code.
2. **Model Selection**: Choose appropriate models based on your needs and performance requirements.
3. **Inference Configuration**: Experiment with different parameters to optimize classification accuracy.
4. **System Prompt**: Consider customizing the system prompt for your specific use case, while maintaining the core classification structure.

## Limitations

- Requires an active Anthropic API key
- Subject to Anthropic's API pricing and rate limits
- Classification quality depends on the quality of agent descriptions and system prompt

For more information, see the [Classifier Overview](/multi-agent-orchestrator/classifier/overview) and [Agents](/multi-agent-orchestrator/agents/overview) documentation.