RAG that intelligently adapts to your use case, data, and queries

circlemind fast-graphrag

Streamlined and promptable Fast GraphRAG framework designed for interpretable, high-precision, agent-driven retrieval workflows.
Looking for a Managed Service? »

Note

Using The Wizard of Oz as a test corpus, fast-graphrag costs $0.08 vs. $0.48 for graphrag: a 6x cost saving that grows further with data size and the number of insertions.

News (and Coming Soon)

  • Support for IDF weighting of entities
  • Support for generic entities and concepts (initial commit)
  • [2024.12.02] Benchmarks comparing Fast GraphRAG to LightRAG, GraphRAG and VectorDBs released here
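The IDF weighting mentioned above follows the standard inverse-document-frequency idea: entities that appear in almost every chunk carry little discriminative signal, so they are down-weighted. As a minimal sketch of the concept (illustrative only, not the library's internal implementation), an entity occurring in df of N chunks can be weighted by log(N / df):

```python
import math

def idf_weights(entity_occurrences: dict[str, int], num_chunks: int) -> dict[str, float]:
    """Weight each entity by inverse document frequency: rarer entities score higher."""
    return {
        entity: math.log(num_chunks / df)
        for entity, df in entity_occurrences.items()
    }

# Hypothetical counts: "Scrooge" appears in 8 of 10 chunks, "Fezziwig" in only 1.
weights = idf_weights({"Scrooge": 8, "Fezziwig": 1}, num_chunks=10)
# The rare "Fezziwig" gets a higher weight than the ubiquitous "Scrooge".
```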

Features

  • Interpretable and Debuggable Knowledge: Graphs offer a human-navigable view of knowledge that can be queried, visualized, and updated.
  • Fast, Low-cost, and Efficient: Designed to run at scale without heavy resource or cost requirements.
  • Dynamic Data: Automatically generate and refine graphs to best fit your domain and ontology needs.
  • Incremental Updates: Supports real-time updates as your data evolves.
  • Intelligent Exploration: Leverages PageRank-based graph exploration for enhanced accuracy and dependability.
  • Asynchronous & Typed: Fully asynchronous, with complete type support for robust and predictable workflows.

Fast GraphRAG is built to fit seamlessly into your retrieval pipeline, giving you the power of advanced RAG, without the overhead of building and designing agentic workflows.

Install

Install from source (recommended for best performance)

# clone this repo first
cd fast_graphrag
poetry install

Install from PyPi (recommended for stability)

pip install fast-graphrag

Quickstart

Set the OpenAI API key in the environment:

export OPENAI_API_KEY="sk-..."

Download a copy of A Christmas Carol by Charles Dickens:

curl https://raw.githubusercontent.com/circlemind-ai/fast-graphrag/refs/heads/main/mock_data.txt > ./book.txt

Optional: set a limit on concurrent requests to control how many tasks are processed simultaneously:

export CONCURRENT_TASK_LIMIT=8
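The variable name comes from the snippet above; how fast-graphrag applies it internally is not shown here. As a rough sketch of how such a cap is commonly enforced in asynchronous Python, an asyncio.Semaphore can gate how many coroutines issue requests at once (the gating code below is illustrative, not the library's implementation):

```python
import asyncio
import os

# Read the cap from the environment, falling back to a default (illustrative).
CONCURRENT_TASK_LIMIT = int(os.environ.get("CONCURRENT_TASK_LIMIT", "8"))

async def gated(semaphore: asyncio.Semaphore, task_id: int) -> int:
    # At most CONCURRENT_TASK_LIMIT coroutines execute this body concurrently.
    async with semaphore:
        await asyncio.sleep(0.01)  # stand-in for an LLM or embedding request
        return task_id

async def main() -> list[int]:
    semaphore = asyncio.Semaphore(CONCURRENT_TASK_LIMIT)
    return await asyncio.gather(*(gated(semaphore, i) for i in range(20)))

results = asyncio.run(main())
```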

Use the Python snippet below:

from fast_graphrag import GraphRAG

DOMAIN = "Analyze this story and identify the characters. Focus on how they interact with each other, the locations they explore, and their relationships."

EXAMPLE_QUERIES = [
    "What is the significance of Christmas Eve in A Christmas Carol?",
    "How does the setting of Victorian London contribute to the story's themes?",
    "Describe the chain of events that leads to Scrooge's transformation.",
    "How does Dickens use the different spirits (Past, Present, and Future) to guide Scrooge?",
    "Why does Dickens choose to divide the story into \"staves\" rather than chapters?"
]

ENTITY_TYPES = ["Character", "Animal", "Place", "Object", "Activity", "Event"]

grag = GraphRAG(
    working_dir="./book_example",
    domain=DOMAIN,
    example_queries="\n".join(EXAMPLE_QUERIES),
    entity_types=ENTITY_TYPES
)

with open("./book.txt") as f:
    grag.insert(f.read())

print(grag.query("Who is Scrooge?").response)

The next time you initialize fast-graphrag from the same working directory, it will retain all the knowledge automatically.

Examples

Please refer to the examples folder for a list of tutorials on common use cases of the library:

  • custom_llm.py: a brief example on how to configure fast-graphrag to run with different OpenAI API compatible language models and embedders;
  • checkpointing.ipynb: a tutorial on how to use checkpoints to avoid irreversible data corruption;
  • query_parameters.ipynb: a tutorial on how to use the different query parameters. In particular, it shows how to include references to the used information in the provided answer (using the with_references=True parameter).

Contributing

Whether big or small, we love contributions. Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated. Check out our guide to see how to get started.

Not sure where to get started? You can join our Discord and ask us any questions there.

Philosophy

Our mission is to increase the number of successful GenAI applications in the world. To do that, we build memory and data tools that enable LLM apps to leverage highly specialized retrieval pipelines without the complexity of setting up and maintaining agentic workflows.

Fast GraphRAG currently exploits the Personalized PageRank algorithm to explore the graph and find the pieces of information most relevant to your query. For an overview of why this works, you can check out the HippoRAG paper here.
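To make the idea concrete, here is a minimal Personalized PageRank in pure Python (a toy sketch, not the library's implementation): the restart distribution is concentrated on the query's seed entities, so the stationary scores rank nodes by how reachable they are from those seeds.

```python
def personalized_pagerank(
    graph: dict[str, list[str]],
    seeds: dict[str, float],
    damping: float = 0.85,
    iterations: int = 50,
) -> dict[str, float]:
    """Power iteration where each step restarts to the seed distribution."""
    nodes = list(graph)
    scores = {n: seeds.get(n, 0.0) for n in nodes}
    for _ in range(iterations):
        nxt = {n: (1 - damping) * seeds.get(n, 0.0) for n in nodes}
        for node, neighbours in graph.items():
            if neighbours:
                share = damping * scores[node] / len(neighbours)
                for nb in neighbours:
                    nxt[nb] += share
            else:
                # Dangling node: send its mass back to the seed distribution.
                for n in nodes:
                    nxt[n] += damping * scores[node] * seeds.get(n, 0.0)
        scores = nxt
    return scores

# Toy entity graph: Scrooge links to Marley and Tiny Tim; Fezziwig is peripheral.
graph = {
    "Scrooge": ["Marley", "Tiny Tim"],
    "Marley": ["Scrooge"],
    "Tiny Tim": ["Scrooge"],
    "Fezziwig": ["Marley"],
}
scores = personalized_pagerank(graph, seeds={"Scrooge": 1.0})
# Nodes near the seed ("Marley", "Tiny Tim") outrank distant ones ("Fezziwig").
```

Because the random walk keeps restarting at the seed entities extracted from the query, high scores mean "close to what the query mentioned", which is exactly the relevance signal used to pick supporting passages.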

Open-source or Managed Service

This repo is under the MIT License. See LICENSE.txt for more information.

The fastest and most reliable way to get started with Fast GraphRAG is using our managed service. Your first 100 requests are free every month, after which you pay based on usage.

circlemind fast-graphrag demo

To learn more about our managed service, book a demo or see our docs.
