Commit: update docs
crazyyanchao committed Aug 3, 2024
1 parent 5e5293e commit 1a9e0a7
Showing 7 changed files with 109 additions and 2 deletions.
54 changes: 54 additions & 0 deletions README-zh.md
@@ -0,0 +1,54 @@
# LLMCompiler

[![English](https://img.shields.io/badge/English-Click-yellow)](README.md)
[![中文文档](https://img.shields.io/badge/中文文档-点击查看-orange)](README-zh.md)

&emsp;LLMCompiler is an agent architecture that speeds up the execution of agent tasks by running them in parallel within a DAG. It also reduces redundant token usage, and therefore cost, by cutting the number of calls to the LLM. The implementation is inspired by *An LLM Compiler for Parallel Function Calling*.

&emsp;This implementation is useful when an agent needs to call a large number of tools. If the tools you need exceed the LLM's context limit, you can extend the agent nodes on top of this package: split the tools across different agents and assemble them into a more powerful LLMCompiler. The approach has been validated in a production-grade application in which about 60 tools were configured and accuracy exceeded 90% when paired with few-shot examples.

## LLMCompiler Frame Diagram

![LLMCompiler Frame Diagram](images/frame.png)

## Task Fetching Unit

![Task Fetching Unit](images/task-fetch.png)

## How To Use

```shell
pip install llmcompiler
```

```py
from llmcompiler.result.chat import ChatRequest
from llmcompiler.tools.tools import DefineTools
from langchain_openai.chat_models.base import ChatOpenAI
from llmcompiler.chat.run import RunLLMCompiler

chat = ChatRequest(message="<YOUR_MESSAGE>")

# tools is a list of Langchain BaseTool instances.
# The default configuration is for demonstration only; inheriting BaseTool to implement your own tools gives finer control over the details.
# For multi-parameter dependencies, inherit DAGFlowParams; see `llmcompiler/tools/basetool/fund_basic.py` for a reference implementation.
tools = DefineTools().tools()

# Any implementation of BaseLanguageModel is supported.
llm = ChatOpenAI(model="gpt-4o", temperature=0, max_retries=3)

llm_compiler = RunLLMCompiler(chat, tools, llm)
result = llm_compiler()
print(result)

# More usage patterns can be discussed in the issues; the documentation will continue to improve.
```
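The comments above recommend inheriting BaseTool instead of relying on the demo tool set. The stub below sketches only the shape of such a tool: the `ToolSketch` base class stands in for Langchain's `BaseTool` so the example stays self-contained, and the `ExchangeRateTool` name, fields, and rate table are made up for illustration.

```python
from dataclasses import dataclass


# Stand-in for Langchain's BaseTool, used here only to keep the sketch
# self-contained; a real tool would subclass langchain_core.tools.BaseTool.
@dataclass
class ToolSketch:
    name: str
    description: str

    def run(self, **kwargs):
        return self._run(**kwargs)


class ExchangeRateTool(ToolSketch):
    """Hypothetical tool: look up a fixed exchange rate."""

    def __init__(self):
        super().__init__(
            name="exchange_rate",
            description="Return the exchange rate between two currencies.",
        )

    def _run(self, base: str, quote: str) -> float:
        # A real implementation would call an external API here;
        # the rates below are placeholders for the sketch.
        rates = {("USD", "CNY"): 7.2, ("EUR", "USD"): 1.1}
        return rates.get((base, quote), 1.0)


tool = ExchangeRateTool()
print(tool.run(base="USD", quote="CNY"))  # 7.2
```

A real subclass would also declare an `args_schema` so the planner knows the tool's parameters; that detail is omitted here.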

## Reference Links

- [Paper: An LLM Compiler for Parallel Function Calling](https://arxiv.org/abs/2312.04511)
- [Partial reference code: LLMCompiler on GitHub](https://github.com/langchain-ai/langgraph/blob/main/examples/llm-compiler/LLMCompiler.ipynb)

51 changes: 51 additions & 0 deletions README.md
@@ -1,6 +1,57 @@
# LLMCompiler

[![English](https://img.shields.io/badge/English-Click-yellow)](README.md)
[![中文文档](https://img.shields.io/badge/中文文档-点击查看-orange)](README-zh.md)

&emsp;LLMCompiler is an agent architecture that speeds up the execution of agent tasks by running them in parallel
within a DAG. It also reduces redundant token usage, and therefore cost, by cutting the number of calls to the LLM.
The implementation is inspired by *An LLM Compiler for Parallel Function Calling*.

&emsp;This implementation is useful when an agent needs to call a large number of tools. If the tools you need exceed
the context limit of the LLM, you can extend the agent nodes on top of this package: split the tools across different
agents and assemble them into a more powerful LLMCompiler. The approach has been validated in a production-grade
application in which about 60 tools were configured and accuracy exceeded 90% when paired with few-shot examples.

## LLMCompiler Frame Diagram

![LLMCompiler Frame Diagram](images/frame.png)

## Task Fetching Unit

![Task Fetching Unit](images/task-fetch.png)
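The Task Fetching Unit dispatches every task whose DAG dependencies are already satisfied, rather than executing the plan strictly in order. The stdlib sketch below illustrates that scheduling idea only; the plan, task names, and functions are invented, and this is not llmcompiler's actual implementation.

```python
from concurrent.futures import FIRST_COMPLETED, ThreadPoolExecutor, wait

# A made-up plan: each task lists the tasks whose results it depends on.
# Independent tasks ("search_a", "search_b") can run in parallel; "join"
# must wait for both.
tasks = {
    "search_a": {"deps": [], "fn": lambda r: "result_a"},
    "search_b": {"deps": [], "fn": lambda r: "result_b"},
    "join": {"deps": ["search_a", "search_b"],
             "fn": lambda r: f"{r['search_a']}+{r['search_b']}"},
}


def run_plan(tasks):
    results, pending, futures = {}, dict(tasks), {}
    with ThreadPoolExecutor() as pool:
        while pending or futures:
            # Dispatch every task whose dependencies are already resolved.
            ready = [name for name, task in pending.items()
                     if all(dep in results for dep in task["deps"])]
            for name in ready:
                task = pending.pop(name)
                futures[pool.submit(task["fn"], results)] = name
            # Collect at least one finished task, then look for newly
            # unblocked work.
            done, _ = wait(futures, return_when=FIRST_COMPLETED)
            for fut in done:
                results[futures.pop(fut)] = fut.result()
    return results


print(run_plan(tasks)["join"])  # result_a+result_b
```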

## How To Use

```shell
pip install llmcompiler
```

```py
from llmcompiler.result.chat import ChatRequest
from llmcompiler.tools.tools import DefineTools
from langchain_openai.chat_models.base import ChatOpenAI
from llmcompiler.chat.run import RunLLMCompiler

chat = ChatRequest(message="<YOUR_MESSAGE>")

# tools is a list of Langchain BaseTool instances.
# The default configuration is for demonstration only; inheriting BaseTool to implement your own tools gives finer control over the details.
# For multi-parameter dependencies, inherit DAGFlowParams; see `llmcompiler/tools/basetool/fund_basic.py` for a reference implementation.
tools = DefineTools().tools()

# Any implementation of BaseLanguageModel is supported.
llm = ChatOpenAI(model="gpt-4o", temperature=0, max_retries=3)

llm_compiler = RunLLMCompiler(chat, tools, llm)
result = llm_compiler()
print(result)

# More usage patterns can be discussed in the issues; the documentation will continue to improve.
```

## Reference Links

- [Paper: An LLM Compiler for Parallel Function Calling](https://arxiv.org/abs/2312.04511)
- [Partial reference code: LLMCompiler on GitHub](https://github.com/langchain-ai/langgraph/blob/main/examples/llm-compiler/LLMCompiler.ipynb)

Binary file added images/frame.png
Binary file removed images/img.png
Binary file added images/task-fetch.png
4 changes: 3 additions & 1 deletion llmcompiler/chat/run.py
@@ -41,7 +41,9 @@ def init(self) -> CompiledGraph:
        graph = graph_builder.compile()
        print(
            f"==========================Initialized tools and agent in {round(time.time() - start_time, 2)}s==========================")
-       graph.get_graph().print_ascii()
+       print("The compiled graph can be converted into Mermaid syntax.")
+       print("Paste the Mermaid syntax into https://www.min2k.com/tools/mermaid/ to view it rendered.")
+       print(graph.get_graph().draw_mermaid())
        return graph

def should_continue(self, state: List[BaseMessage]):
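The change above replaces the ASCII graph printout with Mermaid syntax, which is plain text that viewers such as the linked site can render. The helper below only illustrates the kind of flowchart string `draw_mermaid()` emits; the node names are hypothetical, not the actual llmcompiler graph.

```python
def to_mermaid(edges):
    # Build a Mermaid flowchart (top-down) from a list of (src, dst) edges.
    lines = ["graph TD"]
    for src, dst in edges:
        lines.append(f"    {src} --> {dst}")
    return "\n".join(lines)


# Hypothetical agent graph: plan, run two tools in parallel, then join.
edges = [("start", "plan"), ("plan", "tool_a"), ("plan", "tool_b"),
         ("tool_a", "join"), ("tool_b", "join"), ("join", "end")]
print(to_mermaid(edges))
```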
2 changes: 1 addition & 1 deletion setup.py
@@ -3,7 +3,7 @@

setup(
    name='llmcompiler',
-   version="1.0.2",
+   version="1.0.3",
    author="Yc-Ma",
    author_email="[email protected]",
    description='LLMCompiler',
