
feat(flows): add flows #262

Open · Tomas2D wants to merge 7 commits into main from feat/flows
Conversation

@Tomas2D Tomas2D (Contributor) commented Dec 18, 2024

This PR introduces the (experimental) Flow 🚀, a flexible and extensible component for managing and executing workflows.

A Flow is created by providing a zod schema that defines the state, plus an optional output schema that ensures the flow finishes with a result in the desired shape. One can then add steps.

The AgentFlow class serves as a simplified interface for creating multi-agent flows.

A step has a name and a handler, which receives the current state and an execution context, returns updates to the state, and optionally the name of the next step to jump to. The handler can be either a sync/async function or another flow. One can also attach a custom validation schema to a step to ensure data consistency and type safety, so you can be sure that when the flow enters a given step, the state has a particular shape. By default, steps run in the order they were added, but each step can define where it wants to jump next, and the starting step can be changed.
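The transition rules described above (default insertion order, explicit `next`, and the `Flow.PREV` / `Flow.END` sentinels) can be sketched in plain TypeScript. This is an illustrative model of the semantics, not the framework's implementation; the sentinel values and `runFlow` helper are hypothetical stand-ins:

```typescript
// Hypothetical sketch of the step-transition semantics, not the real Flow class.
type StepResult<S> = { update?: Partial<S>; next?: string };
type Handler<S> = (state: S) => StepResult<S>;

const PREV = "__prev__"; // stand-in for Flow.PREV
const END = "__end__";   // stand-in for Flow.END

function runFlow<S>(
  steps: Array<[string, Handler<S>]>,
  initial: S,
): { result: S; visited: string[] } {
  let state = { ...initial };
  const visited: string[] = [];
  let index = 0;
  while (index >= 0 && index < steps.length) {
    const [name, handler] = steps[index];
    visited.push(name);
    const { update, next } = handler(state);
    state = { ...state, ...update }; // merge the returned state updates
    if (next === END) break;
    if (next === PREV) index -= 1; // jump back to the previous step
    else if (next !== undefined) index = steps.findIndex(([n]) => n === next);
    else index += 1; // default: proceed in insertion order
  }
  return { result: state, visited };
}
```

With three steps `a`, `b`, `c` and no explicit `next`, the run visits them in insertion order, matching the example below.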

▶️ More complex examples can be seen in the changed files.

Simple Example

import { Flow } from "bee-agent-framework/flows";
import { z } from "zod";

const schema = z.object({
  hops: z.number().default(0),
});

// Definition
const flow = new Flow({ schema })
  .addStep("a", async (state) => ({})) // does nothing
  .addStep("b", async (state) => ({
    // increments the hop count; by default continues to the next step ("c")
    update: { hops: state.hops + 1 },
  }))
  .addStep("c", async (state) => ({
    update: { hops: state.hops + 1 },
    // randomly jumps back to the previous step ("b") or ends the flow
    next: Math.random() > 0.5 ? Flow.PREV : Flow.END,
  }));

// Execution + Observability
const response = await flow.run({ hops: 0 }).observe((emitter) => {
  emitter.on("start", (data) => console.log(`-> start ${data.step}`));
  emitter.on("error", (data) => console.log(`-> error ${data.step}`));
  emitter.on("success", (data) => console.log(`-> finish ${data.step}`));
});

// Outputs
console.log(`Hops: ${response.result.hops}`);
console.log(`-> steps`, response.steps.map((step) => step.name).join(","));

Agent Flows

The following example is a simple CLI application that reads input from the user and runs a simple multi-agent flow while preserving the conversation history.

import "dotenv/config";
import { BAMChatLLM } from "bee-agent-framework/adapters/bam/chat";
import { UnconstrainedMemory } from "bee-agent-framework/memory/unconstrainedMemory";
import { createConsoleReader } from "examples/helpers/io.js";
import { OpenMeteoTool } from "bee-agent-framework/tools/weather/openMeteo";
import { WikipediaTool } from "bee-agent-framework/tools/search/wikipedia";
import { AgentFlow } from "bee-agent-framework/experimental/flows/agent";
import { BaseMessage, Role } from "bee-agent-framework/llms/primitives/message";

// Flow creation
const flow = new AgentFlow();
flow.addAgent({
  name: "WeatherAgent",
  instructions: "You are a weather assistant. Respond only if you can provide a useful answer.",
  tools: [new OpenMeteoTool()],
  llm: BAMChatLLM.fromPreset("meta-llama/llama-3-1-70b-instruct"),
});
flow.addAgent({
  name: "ResearchAgent",
  instructions: "You are a researcher assistant. Respond only if you can provide a useful answer.",
  tools: [new WikipediaTool()],
  llm: BAMChatLLM.fromPreset("meta-llama/llama-3-1-70b-instruct"),
});
flow.addAgent({
  name: "FinalAgent",
  instructions:
    "You are a helpful assistant. Your task is to make a final answer from the current conversation, starting with the last user message, that provides all useful information.",
  tools: [],
  llm: BAMChatLLM.fromPreset("meta-llama/llama-3-1-70b-instruct"),
});

const reader = createConsoleReader();
const memory = new UnconstrainedMemory();

// Reading user input
for await (const { prompt } of reader) {
  await memory.add(
    BaseMessage.of({
      role: Role.USER,
      text: prompt,
    }),
  );

  const { result } = await flow.run(memory.messages).observe((emitter) => {
    // logging intermediate steps
    emitter.on("success", (data) => {
      reader.write(`-> ${data.step}`, data.response?.update?.finalAnswer ?? "-");
    });
  });

  await memory.addMany(result.newMessages);
  reader.write(`Agent 🤖`, result.finalAnswer);
}
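The conversation-memory pattern in the loop above can be sketched in plain TypeScript: each turn appends the user message and the flow's new messages to a shared history, so the next run sees the whole conversation. The `SimpleMemory` class and `Message` type here are hypothetical illustrations, not the framework's `UnconstrainedMemory` API:

```typescript
// Hypothetical sketch of accumulating conversation history across turns.
type Message = { role: "user" | "assistant"; text: string };

class SimpleMemory {
  readonly messages: Message[] = [];

  // Append a single message to the history.
  add(message: Message) {
    this.messages.push(message);
  }

  // Append several messages at once (e.g. a flow's new messages).
  addMany(messages: Message[]) {
    this.messages.push(...messages);
  }
}
```

Because the whole `memory.messages` array is passed to `flow.run(...)` on every iteration, agents always see prior turns, which is what keeps the CLI conversational.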

Other features

  • built-in integration for traceability
  • validation and type safety
  • error handling (FlowError)
  • composability (step can be another flow)
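The composability bullet can be illustrated in plain TypeScript: if a flow is, conceptually, a function from state to state, then a flow can be built out of other flows. This is a sketch of the idea only; `MiniFlow` and `compose` are hypothetical and not part of the framework's API:

```typescript
// Hypothetical sketch: a "flow" as a state-to-state function, composed of sub-flows.
type MiniFlow<S> = (state: S) => S;

// Run the given flows in sequence, threading the state through each one.
function compose<S>(...flows: MiniFlow<S>[]): MiniFlow<S> {
  return (state) => flows.reduce((current, flow) => flow(current), state);
}

const inner: MiniFlow<{ hops: number }> = (s) => ({ hops: s.hops + 1 });
const outer = compose(inner, inner); // a flow whose "steps" are themselves flows
```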

Ref: #254

@Tomas2D Tomas2D requested a review from a team as a code owner December 18, 2024 20:04
@Tomas2D Tomas2D marked this pull request as draft December 18, 2024 20:04
@Tomas2D Tomas2D force-pushed the feat/flows branch 2 times, most recently from 9c545ba to 3e86be9 Compare December 18, 2024 21:31
Signed-off-by: Tomas Dvorak <[email protected]>
@Tomas2D Tomas2D changed the title feat(flows): init the module feat(flows): add flows Dec 18, 2024
@Tomas2D Tomas2D marked this pull request as ready for review December 20, 2024 12:31