Commit

update doc with Lambda python
cornelcroi committed Aug 28, 2024
1 parent 52efd2f commit ce417bf
Showing 6 changed files with 246 additions and 158 deletions.
8 changes: 6 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -1,11 +1,15 @@
<h2 align="center">Multi-Agent Orchestrator&nbsp;<img alt="Static Badge" src="https://img.shields.io/badge/Beta-4e9bcd"></h2>

<p align="center">Flexible and powerful framework for managing multiple AI agents and handling complex conversations.</p>
<p align="center">
<a href="https://github.com/awslabs/multi-agent-orchestrator"><img alt="GitHub Repo" src="https://img.shields.io/badge/GitHub-Repo-green.svg" /></a>
<a href="https://www.npmjs.com/package/multi-agent-orchestrator"><img alt="npm" src="https://img.shields.io/npm/v/multi-agent-orchestrator.svg?style=flat-square"></a>
<a href="https://pypi.org/project/multi-agent-orchestrator/"><img alt="PyPI" src="https://img.shields.io/pypi/v/multi-agent-orchestrator.svg?style=flat-square"></a>
</p>



## 🔖 Features

- 🧠 **Intelligent intent classification** — Dynamically route queries to the most suitable agent based on context and content.
3 changes: 2 additions & 1 deletion docs/astro.config.mjs
@@ -119,7 +119,8 @@ export default defineConfig({
        label: 'Deployment',
        items: [
          { label: 'Local Development', link: '/deployment/local' },
          { label: 'AWS Lambda Integration', link: '/deployment/aws-lambda' },
          { label: 'AWS Lambda TypeScript', link: '/deployment/aws-lambda-ts' },
          { label: 'AWS Lambda Python', link: '/deployment/aws-lambda-py' },
          { label: 'Demo Web App', link: '/deployment/demo-web-app' },
        ]
      },
172 changes: 172 additions & 0 deletions docs/src/content/docs/deployment/aws-lambda-py.md
@@ -0,0 +1,172 @@
---
title: AWS Lambda Python
description: How to deploy the Multi-Agent Orchestrator System to AWS Lambda using Python
---

This guide walks you through deploying the Multi-Agent Orchestrator System to AWS Lambda using Python. It includes support for both streaming and non-streaming responses, allowing you to run your multi-agent setup in a serverless environment.

## Prerequisites

- AWS account with appropriate permissions
- AWS CLI installed and configured
- Python 3.8 or later installed
- Basic familiarity with AWS Lambda, API Gateway, and Boto3

## Deployment Steps

1. **Install Required Libraries**

Create a new directory for your project and navigate to it:

```bash
mkdir multi-agent-lambda && cd multi-agent-lambda
```

Create a virtual environment and activate it:

```bash
python -m venv venv
source venv/bin/activate # On Windows, use `venv\Scripts\activate`
```

Install the required packages:

```bash
pip install multi-agent-orchestrator boto3
```

After installation, record your dependencies in a `requirements.txt` file:

```bash
pip freeze > requirements.txt
```

2. **Prepare Your Code**

Create a new file, e.g., `lambda_function.py`, and add the following code:

```python
import json
from typing import Dict, Any
from multi_agent_orchestrator.orchestrator import MultiAgentOrchestrator, OrchestratorConfig
from multi_agent_orchestrator.agents import (
    BedrockLLMAgent,
    BedrockLLMAgentOptions
)

# Initialize the orchestrator with verbose logging enabled
orchestrator = MultiAgentOrchestrator(OrchestratorConfig(
    LOG_AGENT_CHAT=True,
    LOG_CLASSIFIER_CHAT=True,
    LOG_CLASSIFIER_RAW_OUTPUT=True,
    LOG_CLASSIFIER_OUTPUT=True,
    LOG_EXECUTION_TIMES=True
))

# Add agents
tech_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Tech Agent",
    streaming=True,
    description="Specializes in technology areas including software development, hardware, AI, cybersecurity, blockchain, cloud computing, emerging tech innovations, and pricing/costs related to technology products and services.",
    model_id="anthropic.claude-3-sonnet-20240229-v1:0",
    inference_config={
        "temperature": 0.1
    }
))

health_agent = BedrockLLMAgent(BedrockLLMAgentOptions(
    name="Health Agent",
    description="Focuses on health and medical topics such as general wellness, nutrition, diseases, treatments, mental health, fitness, healthcare systems, and medical terminology or concepts."
))

orchestrator.add_agent(tech_agent)
orchestrator.add_agent(health_agent)

def lambda_handler(event: Dict[str, Any], context: Any) -> Dict[str, Any]:
    # Parse the incoming API Gateway proxy event
    body = json.loads(event.get('body', '{}'))
    query = body.get('query')
    user_id = body.get('userId')
    session_id = body.get('sessionId')

    # Route the request to the most suitable agent
    response = orchestrator.route_request(query, user_id, session_id)

    if response.streaming:
        # For streaming responses, collect all chunks before returning
        chunks = [chunk for chunk in response.output]
        response_body = {
            "metadata": response.metadata.__dict__,
            "chunks": chunks,
            "streaming": True
        }
    else:
        # For non-streaming responses, include the full output
        response_body = {
            "metadata": response.metadata.__dict__,
            "output": response.output.content,
            "streaming": False
        }

    # Format the response for API Gateway
    return {
        "statusCode": 200,
        "headers": {
            "Content-Type": "application/json"
        },
        "body": json.dumps(response_body),
        "isBase64Encoded": False
    }
```
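For reference, the handler above expects an API Gateway proxy-style event whose `body` field is a JSON string. A minimal local sketch of that contract (the query, user, and session values are illustrative only):

```python
import json

# Hypothetical API Gateway proxy event, as the handler expects it:
# the "body" field is a JSON string, not a nested object.
event = {
    "body": json.dumps({
        "query": "What is artificial intelligence?",
        "userId": "user123",
        "sessionId": "session456"
    })
}

# Mirror of the parsing done at the top of lambda_handler
body = json.loads(event.get("body", "{}"))
print(body["query"])   # -> What is artificial intelligence?
print(body["userId"])  # -> user123
```

Testing this shape locally helps catch the common mistake of sending `body` as a nested JSON object instead of a string.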

3. **Package Your Code**

Create a deployment package by installing the dependencies into a local `package` directory and zipping them together with your `lambda_function.py` file:

```bash
pip install --target ./package multi-agent-orchestrator boto3
cd package
zip -r ../deployment-package.zip .
cd ..
zip -g deployment-package.zip lambda_function.py
```

4. **Deploy Your Code**

Upload the `deployment-package.zip` file to your Lambda function using the AWS Management Console or AWS CLI.
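With the AWS CLI, uploading can be done in one command (this assumes a function named `your-function-name` already exists; the name is a placeholder):

```shell
# Update an existing Lambda function's code from the local zip archive
aws lambda update-function-code \
  --function-name your-function-name \
  --zip-file fileb://deployment-package.zip
```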

5. **Configure IAM Permissions**

Ensure your Lambda function's execution role has permissions to:
- Invoke Amazon Bedrock models
- Write to CloudWatch Logs
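A minimal execution-role policy sketch covering both requirements might look like the following; in production you should scope `Resource` down to the specific model ARNs and log groups you use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents"
      ],
      "Resource": "*"
    }
  ]
}
```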

6. **Configure Lambda Function**

- Set the runtime to Python 3.8 or later.
- Set the handler to `lambda_function.lambda_handler`.

7. **Test Your Deployment**

Use the AWS CLI to invoke your Lambda function (with AWS CLI v2, `--cli-binary-format raw-in-base64-out` is required to send a raw JSON payload):

```bash
aws lambda invoke \
  --function-name your-function-name \
  --cli-binary-format raw-in-base64-out \
  --payload '{"body": "{\"query\": \"What is artificial intelligence?\", \"userId\": \"user123\", \"sessionId\": \"session456\"}"}' \
  output.json
```

Check the `output.json` file for the response.
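Because the handler returns an API Gateway proxy response, the `body` in `output.json` is itself a JSON string and must be decoded a second time. A sketch of that double decode, using a hypothetical non-streaming result (the metadata fields shown are illustrative, not the orchestrator's exact schema):

```python
import json

# Hypothetical sample of what output.json may contain for a
# non-streaming reply from the handler in this guide.
result = {
    "statusCode": 200,
    "headers": {"Content-Type": "application/json"},
    "body": json.dumps({
        "metadata": {"agent_name": "Tech Agent"},
        "output": "Artificial intelligence is ...",
        "streaming": False
    }),
    "isBase64Encoded": False
}

# The outer dict is the Lambda response; the body is a JSON string
payload = json.loads(result["body"])
print(payload["metadata"]["agent_name"])  # -> Tech Agent
print(payload["streaming"])               # -> False
```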


## Considerations

- Be aware of Lambda execution time limits for long-running conversations.
- Monitor your Lambda function's performance and adjust the timeout and memory settings as needed.
- Consider implementing error handling and retry logic in your client application.
- For true real-time streaming, you may need to use API Gateway with WebSocket support. The current implementation collects all chunks before sending the response.

By following these steps, you'll have your Multi-Agent Orchestrator System running in AWS Lambda using Python, ready to handle both streaming and non-streaming responses in a serverless architecture.
Original file line number Diff line number Diff line change
@@ -1,10 +1,8 @@
---
title: AWS Lambda Deployment
description: How to deploy the Multi-Agent Orchestrator System to AWS Lambda with streaming support
title: AWS Lambda TypeScript
description: How to deploy the Multi-Agent Orchestrator System to AWS Lambda with streaming support using TypeScript
---

## Overview

Deploying the Multi-Agent Orchestrator System to AWS Lambda allows you to run your multi-agent setup in a serverless environment. This guide will walk you through the process of setting up and deploying your orchestrator to Lambda, including support for streaming responses.

## Prerequisites
@@ -131,61 +129,6 @@ Deploying the Multi-Agent Orchestrator System to AWS Lambda allows you to run yo

Check the `output.json` file for the streamed response.

## Handling Streaming Responses

To handle streaming responses in your client application:

1. Make a request to the Lambda function URL.
2. Parse the response as a stream of JSON objects.
3. Handle different types of messages:
- `metadata`: Contains information about the selected agent and other metadata.
- `chunk`: Contains a part of the streaming response.
- `complete`: Contains the full response for non-streaming agents.
- `error`: Indicates an error occurred.

Example client-side code (using fetch API):

```javascript
async function getStreamingResponse(query, userId, sessionId) {
  const response = await fetch('https://your-lambda-function-url', {
    method: 'POST',
    body: JSON.stringify({ query, userId, sessionId }),
    headers: {
      'Content-Type': 'application/json',
      // Include AWS IAM authentication headers here
    },
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;

    const chunk = decoder.decode(value);
    const messages = chunk.split('\n').filter(Boolean).map(JSON.parse);

    for (const message of messages) {
      switch (message.type) {
        case 'metadata':
          console.log('Metadata:', message.data);
          break;
        case 'chunk':
          console.log('Chunk:', message.data);
          break;
        case 'complete':
          console.log('Complete response:', message.data);
          break;
        case 'error':
          console.error('Error:', message.data);
          break;
      }
    }
  }
}
```

## Considerations

- Ensure your client can handle streaming responses appropriately.
