Add MagenticOne API #4782
base: main
Conversation
…treamline initialization
…and initialization instructions
Nice!
Let's create a doc/blog with contents similar to the m1 package README: https://github.com/microsoft/autogen/tree/main/python/packages/autogen-magentic-one
Let's also add a hil_mode (human-in-the-loop mode, default False) where we optionally add the user proxy to the agent team, like the m1 example: https://github.com/main/python/packages/autogen-magentic-one/examples/example.py
Logging is no longer necessary, so I think this is good to go.
I pushed a small commit to disable OCR in the websurfer; I am only seeing refusals for OCR, so it is a wasted API call.
@gagb Maybe also point to this API in the README of the magentic-one package and say this is the preferred way to use Magentic-One. I wonder if it's also worth a reference in the main README of autogen, or in other prominent places.
```diff
     """

-    def __init__(self, client: OpenAIChatCompletionClient):
+    def __init__(self, client: OpenAIChatCompletionClient, hil_mode: bool = False):
```
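The hil_mode flag discussed above can be sketched as follows. This is a minimal, self-contained illustration of the pattern; the agent classes here are stand-ins, not the real autogen implementations, and the agent names are taken from the m1 example.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Agent:
    """Stand-in for a real autogen agent."""
    name: str


@dataclass
class MagenticOne:
    """Sketch of the proposed team wrapper with an optional human in the loop."""
    hil_mode: bool = False
    agents: List[Agent] = field(default_factory=list)

    def __post_init__(self) -> None:
        # Core agent team, as in the m1 example.
        self.agents = [
            Agent("Orchestrator"),
            Agent("WebSurfer"),
            Agent("FileSurfer"),
            Agent("Coder"),
        ]
        if self.hil_mode:
            # Optionally add the user proxy so a human stays in the loop.
            self.agents.append(Agent("UserProxy"))


team = MagenticOne(hil_mode=True)
print([a.name for a in team.agents])
```

With the default hil_mode=False, the team runs fully autonomously; passing hil_mode=True appends the user proxy, mirroring the m1 example linked above.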
Use the autogen_core.models.ChatCompletionClient base class instead of the concrete class for the argument type.
I did this because we only tested with OpenAI. Do you still think I should change it?
Ah, I see. Though this client has already been used by many for Ollama models. Also there is AzureOpenAIChatCompletionClient, which we also tested. So I think we should still update it to use the base class.
Instead of relying on the type, I think we need to validate the model capabilities in the constructor, unless that is already done in the base class. Sorry, I'm on mobile, so it's a bit hard to switch pages here.
We can also validate the model name and raise a warning if the model is something we haven't tested.
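The capability check suggested here could look something like this. It is a sketch only: the attributes are read duck-typed off the client, the capability field names mirror autogen_core's model capabilities but are assumptions here, and the tested-model set is illustrative.

```python
import warnings

# Illustrative: the models the thread reports testing against.
TESTED_MODELS = {"gpt-4o"}


def validate_client(client) -> None:
    """Validate model capabilities instead of relying on a concrete client type."""
    # Assumed attribute names; read duck-typed so any ChatCompletionClient works.
    caps = getattr(client, "capabilities", {})
    if not caps.get("function_calling", False):
        raise ValueError("MagenticOne requires a model with function calling.")
    model = getattr(client, "model", "")
    if model not in TESTED_MODELS:
        # Warn rather than fail, so untested-but-capable models still run.
        warnings.warn(f"Model {model!r} has not been tested with MagenticOne.")
```

This keeps the constructor open to any client that satisfies the base-class contract, while still failing fast on models that cannot drive the team.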
Okay good point. I will do this.
I agree. I think we should update the MagenticOne page to point to this API. It would also be good to have this team as a preset in Studio. @victordibia
…nAIChatCompletionClient
…ess to initialization arguments
…bility with OpenAI GPT-4o model
What if we added a simple CLI that looks like this once you pip install autogen_ext?
m1 "find me a nice shwarma place in my neighborhood"
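A possible shape for that m1 entry point, sketched with argparse. The team construction is stubbed out here since the point is the CLI surface; the flag name and the MagenticOne call are assumptions, not the shipped interface.

```python
import argparse


def main(argv=None) -> int:
    parser = argparse.ArgumentParser(
        prog="m1",
        description="Run a task with the MagenticOne agent team.",
    )
    parser.add_argument("task", help="natural-language task to perform")
    parser.add_argument(
        "--hil-mode",
        action="store_true",
        help="include the user proxy for human-in-the-loop runs",
    )
    args = parser.parse_args(argv)
    # The real CLI would build the team and stream the run, e.g.:
    #   team = MagenticOne(client=..., hil_mode=args.hil_mode)
    print(f"task={args.task!r} hil_mode={args.hil_mode}")
    return 0


main(["find me a nice shwarma place in my neighborhood", "--hil-mode"])
```

Registering main as a console_scripts entry point in the package metadata would make the m1 command available after installation.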
This rocks!!
Enables a simple API