diff --git a/README-zh.md b/README-zh.md
new file mode 100644
index 0000000..94b0ee5
--- /dev/null
+++ b/README-zh.md
@@ -0,0 +1,54 @@
+# LLMCompiler
+
+[![English](https://img.shields.io/badge/English-Click-yellow)](README.md)
+[![中文文档](https://img.shields.io/badge/中文文档-点击查看-orange)](README-zh.md)
+
+LLMCompiler is an agent architecture designed to speed up the execution of agent tasks by executing them quickly in a DAG. It also saves the cost of redundant token usage by reducing the number of calls to the LLM. The implementation is inspired by *An LLM Compiler for Parallel Function Calling*.
+
+This implementation is useful when an agent needs to call a large number of tools. If the tools you need exceed the LLM's context limit, you can extend the agent nodes on this basis: divide the tools among different agents and assemble them to create a more powerful LLMCompiler. This design has already been validated in a production-grade application, where about 60 tools were configured and accuracy exceeded 90% when paired with few-shot prompting.
+
+## LLMCompiler Frame Diagram
+
+![LLMCompiler Frame Diagram](images/frame.png)
+
+## Task Fetching Unit
+
+![Task Fetching Unit](images/task-fetch.png)
+
+## How To Use
+
+```shell
+pip install llmcompiler
+```
+
+```py
+from llmcompiler.result.chat import ChatRequest
+from llmcompiler.tools.tools import DefineTools
+from langchain_openai.chat_models.base import ChatOpenAI
+from llmcompiler.chat.run import RunLLMCompiler
+
+chat = ChatRequest(message="")
+
+# `tools` is a list based on LangChain BaseTool.
+# The default configuration is for demonstration only; it is recommended to subclass BaseTool to implement your own tools, which gives finer control over the details.
+# For multi-parameter dependencies, inherit DAGFlowParams; see `llmcompiler/tools/basetool/fund_basic.py` for a reference implementation.
+tools = DefineTools().tools()
+
+# Any implementation class of BaseLanguageModel is supported.
+llm = ChatOpenAI(model="gpt-4o", temperature=0, max_retries=3)
+
+llm_compiler = RunLLMCompiler(chat, tools, llm)
+result = llm_compiler()
+print(result)
+
+# Further usage can be discussed in the issues; the documentation will continue to be improved.
+```
+
+## Reference Links
+
+- [Paper: An LLM Compiler for Parallel Function Calling](https://arxiv.org/abs/2312.04511)
+- [Partial reference code: LLMCompiler on GitHub](https://github.com/langchain-ai/langgraph/blob/main/examples/llm-compiler/LLMCompiler.ipynb)
+
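The usage section above recommends subclassing `BaseTool` rather than relying on the bundled demo tools. Below is a minimal sketch of such a subclass, assuming the standard `langchain_core.tools.BaseTool` interface; the tool name, input schema, and stubbed return value are illustrative and not part of this repository.

```py
# A minimal custom-tool sketch, assuming the standard LangChain BaseTool
# interface. The tool name, schema, and stubbed price lookup are hypothetical.
from typing import Type

from langchain_core.tools import BaseTool
from pydantic import BaseModel, Field


class StockPriceInput(BaseModel):
    symbol: str = Field(description="Ticker symbol to look up.")


class StockPriceTool(BaseTool):
    name: str = "stock_price"
    description: str = "Return the latest price for a stock symbol."
    args_schema: Type[BaseModel] = StockPriceInput

    def _run(self, symbol: str) -> str:
        # Replace this stub with a real data-source call.
        return f"{symbol}: 42.00"
```

A list such as `[StockPriceTool()]` could then be passed to `RunLLMCompiler` in place of `DefineTools().tools()`.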
diff --git a/README.md b/README.md
index 892d4e8..12bcfed 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,57 @@
 # LLMCompiler
+[![English](https://img.shields.io/badge/English-Click-yellow)](README.md)
+[![中文文档](https://img.shields.io/badge/中文文档-点击查看-orange)](README-zh.md)
+
+LLMCompiler is an agent architecture designed to speed up the execution of agent tasks by executing them quickly in a DAG. It also saves the cost of redundant token usage by reducing the number of calls to the LLM. The implementation is inspired by *An LLM Compiler for Parallel Function Calling*.
+
+This implementation is useful when an agent needs to call a large number of tools. If the tools you need exceed the context limit of the LLM, you can extend the agent nodes on this basis: divide the tools among different agents and assemble them to create a more powerful LLMCompiler. This design has already been proven in a production-level application, where about 60 tools were configured and the accuracy rate was above 90% when paired with few-shot prompting.
+
+## LLMCompiler Frame Diagram
+
+![LLMCompiler Frame Diagram](images/frame.png)
+
+## Task Fetching Unit
+
+![Task Fetching Unit](images/task-fetch.png)
+
+## How To Use
+
+```shell
+pip install llmcompiler
+```
+
+```py
+from llmcompiler.result.chat import ChatRequest
+from llmcompiler.tools.tools import DefineTools
+from langchain_openai.chat_models.base import ChatOpenAI
+from llmcompiler.chat.run import RunLLMCompiler
+
+chat = ChatRequest(message="")
+
+# `tools` is a list based on LangChain BaseTool.
+# The default configuration is for demonstration only; it is recommended to subclass BaseTool to implement your own tools, which gives finer control over the details.
+# For multi-parameter dependencies, inherit DAGFlowParams; see `llmcompiler/tools/basetool/fund_basic.py` for a reference implementation.
+tools = DefineTools().tools()
+
+# Any implementation class of BaseLanguageModel is supported.
+llm = ChatOpenAI(model="gpt-4o", temperature=0, max_retries=3)
+
+llm_compiler = RunLLMCompiler(chat, tools, llm)
+result = llm_compiler()
+print(result)
+
+# Further usage can be discussed in the issues; the documentation will continue to be improved.
+```
+
+## Reference Links
+
+- [Paper: An LLM Compiler for Parallel Function Calling](https://arxiv.org/abs/2312.04511)
+- [Partial reference code: LLMCompiler on GitHub](https://github.com/langchain-ai/langgraph/blob/main/examples/llm-compiler/LLMCompiler.ipynb)
+
diff --git a/images/frame.png b/images/frame.png
new file mode 100644
index 0000000..2cd3a4a
Binary files /dev/null and b/images/frame.png differ
diff --git a/images/img.png b/images/img.png
deleted file mode 100644
index 9ac502e..0000000
Binary files a/images/img.png and /dev/null differ
diff --git a/images/task-fetch.png b/images/task-fetch.png
new file mode 100644
index 0000000..b440398
Binary files /dev/null and b/images/task-fetch.png differ
diff --git a/llmcompiler/chat/run.py b/llmcompiler/chat/run.py
index 27a7efb..bc07ce9 100644
--- a/llmcompiler/chat/run.py
+++ b/llmcompiler/chat/run.py
@@ -41,7 +41,9 @@ def init(self) -> CompiledGraph:
         graph = graph_builder.compile()
         print(
             f"==========================初始化工具集和Agent:{round(time.time() - start_time, 2)}秒==========================")
-        graph.get_graph().print_ascii()
+        print("We can convert a graph class into Mermaid syntax.")
+        print("On https://www.min2k.com/tools/mermaid/, you can view visual results of Mermaid syntax.")
+        print(graph.get_graph().draw_mermaid())
         return graph

     def should_continue(self, state: List[BaseMessage]):
diff --git a/setup.py b/setup.py
index 8c32ffb..ae3245b 100644
--- a/setup.py
+++ b/setup.py
@@ -3,7 +3,7 @@
 setup(
     name='llmcompiler',
-    version="1.0.2",
+    version="1.0.3",
     author="Yc-Ma",
     author_email="yanchaoma@foxmail.com",
     description='LLMCompiler',
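With the change in `llmcompiler/chat/run.py`, `init()` now prints Mermaid syntax for the compiled graph instead of ASCII art. The same graph object can also be rendered straight to an image; a short sketch, assuming `langchain_core`'s `draw_mermaid_png` helper (which calls the mermaid.ink web service by default) and an arbitrary output path:

```py
# Sketch: render the compiled graph to a PNG instead of printing Mermaid text.
# Assumes `graph` is the CompiledGraph returned by init(), and that the
# installed langchain_core version provides Graph.draw_mermaid_png.
png_bytes = graph.get_graph().draw_mermaid_png()
with open("llmcompiler-graph.png", "wb") as f:
    f.write(png_bytes)
```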