The basic idea is to let the LLM decide which dataset is the best fit by auto-generating a prompt like the one below:
node_2.register_for_llm(
    name="query_vectordb",
    description="""Retrieves relevant information and generates a response based on the query.
Available datasets:
- Dataset ID 3: New Dataset2 - Description of New Dataset. Something new.
- Dataset ID 4: FLAML - FLAML documents.
Choose the appropriate dataset ID based on the query.""",
)(query_vectordb)
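For illustration, the description string can be assembled automatically from dataset metadata along these lines; the Dataset dataclass and build_tool_description helper are names I made up for the sketch, not AutoGen APIs:

from dataclasses import dataclass

@dataclass
class Dataset:
    id: int
    name: str
    description: str

def build_tool_description(datasets: list[Dataset]) -> str:
    """Assemble the tool description prompt from dataset metadata."""
    lines = [
        "Retrieves relevant information and generates a response based on the query.",
        "Available datasets:",
    ]
    lines += [f"- Dataset ID {d.id}: {d.name} - {d.description}" for d in datasets]
    lines.append("Choose the appropriate dataset ID based on the query.")
    return "\n".join(lines)

datasets = [
    Dataset(3, "New Dataset2", "Description of New Dataset. Something new."),
    Dataset(4, "FLAML", "FLAML documents."),
]
print(build_tool_description(datasets))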
The LLM will deduce the best fit based on the question. As you can see, I abandoned the RAG nodes in AutoGen and replaced them with a custom tool/function, query_vectordb.
However, this is still a proof of concept in progress.
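For reference, here is a minimal sketch of what query_vectordb could look like; the in-memory dataset registry and the keyword-matching retrieval are stand-ins for a real vector store lookup:

from typing import Annotated

# Hypothetical dataset registry; IDs and names mirror the tool description above.
DATASETS = {
    3: {"name": "New Dataset2", "docs": ["Something new about New Dataset."]},
    4: {"name": "FLAML", "docs": ["FLAML documents cover AutoML and hyperparameter tuning."]},
}

def query_vectordb(
    query: Annotated[str, "The question to answer from the vector DB."],
    dataset_id: Annotated[int, "Dataset ID chosen by the LLM."],
) -> str:
    """Return retrieved context for the query from the chosen dataset."""
    dataset = DATASETS.get(dataset_id)
    if dataset is None:
        return f"Unknown dataset ID {dataset_id}; available IDs: {sorted(DATASETS)}"
    # Stand-in retrieval: a real implementation would embed the query and run a
    # similarity search against the vector store backing this dataset.
    hits = [doc for doc in dataset["docs"]
            if any(word.lower() in doc.lower() for word in query.split())]
    return "\n\n".join(hits) or f"No matches found in dataset '{dataset['name']}'."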
You don't need to choose a dataset anywhere; you just need to enable RAG in the converse config between the two agents. Of course, we need to make sure each dataset has a proper name and description.
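To make that concrete, here is a purely hypothetical shape for such a converse config; none of these keys exist in AutoGen today, they only illustrate where the dataset names and descriptions would live:

# Hypothetical converse config between two agents; the keys are illustrative only.
converse_config = {
    "enable_rag": True,  # turn on retrieval for this conversation
    "datasets": [
        # Each dataset needs a proper name and description so the LLM can route queries.
        {"id": 3, "name": "New Dataset2", "description": "Description of New Dataset. Something new."},
        {"id": 4, "name": "FLAML", "description": "FLAML documents."},
    ],
}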
Great progress!
How do knowledge bases relate to RAG agents?