
Feature request: Send Traces to Observability Frameworks #152

Open
maxritter opened this issue Dec 12, 2024 · 9 comments
Labels: enhancement (New feature or request)

maxritter commented Dec 12, 2024

Use case

For production use-cases, we need some way to send traces of the orchestrator and agents to an Observability Framework like Langfuse, so they can be visualized there nicely and combined with user feedback.

Right now, it is unclear to me how this is possible. Only getting text from logs is not sufficient for debugging when a lot of users utilize the system.

Solution/User Experience

Need some kind of mechanism to write traces to external SDKs like the one from Langfuse (https://langfuse.com/docs/integrations/amazon-bedrock).

Alternative solutions

No response

@maxritter maxritter changed the title Feature request: Add support for Tracing Feature request: Add support for Tracing in External Tools Dec 12, 2024
@maxritter maxritter changed the title Feature request: Add support for Tracing in External Tools Feature request: Send Traces to Observability Frameworks Dec 12, 2024
@cornelcroi (Contributor)

Hi @maxritter, actually you can create a custom logger (or use your own) that sends the logs where you need them.
Have a look here: https://awslabs.github.io/multi-agent-orchestrator/cookbook/monitoring/logging/

@cornelcroi cornelcroi self-assigned this Dec 12, 2024
@maxritter (Author)

@cornelcroi Thanks a lot for the hint! With that I could at least use https://github.com/kislyuk/watchtower to push those logs to CloudWatch and then search for them using the trace information that I record via the Python SDK before/after a model invocation in Langfuse (https://langfuse.com/docs/sdk/python/example).

If anybody knows a smarter way to handle observability on Bedrock Agents, I am curious to know!
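The watchtower idea can be sketched with a stdlib-only stand-in; in a real setup the `ForwardingHandler` below would be replaced by `watchtower.CloudWatchLogHandler(log_group_name=...)`. The logger name and trace id here are made up for illustration.

```python
import logging

class ForwardingHandler(logging.Handler):
    """Stand-in for watchtower.CloudWatchLogHandler: collects formatted records."""
    def __init__(self):
        super().__init__()
        self.records = []

    def emit(self, record):
        self.records.append(self.format(record))

logger = logging.getLogger("multi_agent_orchestrator")
logger.setLevel(logging.INFO)
handler = ForwardingHandler()
# Put the Langfuse trace id into every line so the logs are searchable by it.
handler.setFormatter(logging.Formatter("%(trace_id)s %(message)s"))
logger.addHandler(handler)

logger.info("agent response", extra={"trace_id": "lf-trace-123"})
```

With watchtower swapped in, the same `extra={"trace_id": ...}` pattern makes the CloudWatch log entries searchable by the Langfuse trace id.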

@brnaba-aws (Contributor)

For observability with Bedrock Agents, we could also forward the trace to the Logger.
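As a hedged sketch of that idea: when `invoke_agent` is called with `enableTrace=True`, the boto3 response stream interleaves "chunk" and "trace" events, and the trace events can simply be forwarded to the logger. The agent ids below are placeholders, and `split_events` / `invoke_with_trace` are my own names.

```python
import json
import logging

logger = logging.getLogger("bedrock_agent_traces")

def split_events(events):
    """Separate a Bedrock Agent response stream into completion text and trace payloads."""
    completion, traces = "", []
    for event in events:
        if "chunk" in event:
            completion += event["chunk"]["bytes"].decode("utf-8")
        elif "trace" in event:
            traces.append(event["trace"]["trace"])
    return completion, traces

def invoke_with_trace(prompt: str) -> str:
    import boto3  # deferred so the pure helper above works without AWS deps

    client = boto3.client("bedrock-agent-runtime")
    response = client.invoke_agent(
        agentId="AGENT_ID",        # placeholder id
        agentAliasId="ALIAS_ID",   # placeholder alias
        sessionId="session-1",
        inputText=prompt,
        enableTrace=True,          # ask Bedrock to stream trace events
    )
    completion, traces = split_events(response["completion"])
    for t in traces:
        logger.info(json.dumps(t, default=str))  # forward each trace to the logger
    return completion
```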

@maxritter (Author)

@brnaba-aws Thanks for your help. I will try to implement this to send the input, output and reasoning steps to LangFuse like in this example from AWS (https://aws.amazon.com/de/blogs/machine-learning/accelerate-analysis-and-discovery-of-cancer-biomarkers-with-amazon-bedrock-agents/):

[image attachment]

@brnaba-aws (Contributor)

Hi @maxritter
Please let us know how it goes; we might provide this as an example in our repo for others.

@brnaba-aws (Contributor)

@maxritter
Here is a suggestion:

import inspect
from functools import wraps
from typing import Any, Dict, Optional

def create_observability_decorator(
    framework_name: str = "langfuse",
    config: Optional[Dict] = None
):
    """
    Factory function that returns the appropriate decorator based on the
    framework, with optional configuration.
    """
    config = config or {}

    if framework_name == "langfuse":
        from langfuse.decorators import observe
        return observe(**config)
    elif framework_name == "opentelemetry":
        # Illustrative import path; the real OpenTelemetry API wraps spans differently
        from opentelemetry.trace import trace
        return trace(**config)
    elif framework_name == "honeycomb":
        # Illustrative import path
        from honeycomb.trace import span
        return span(**config)
    else:
        def noop_decorator(func):
            @wraps(func)
            async def wrapper(*args, **kwargs):
                return await func(*args, **kwargs) if inspect.iscoroutinefunction(func) else func(*args, **kwargs)
            return wrapper
        return noop_decorator

# Usage example with configuration:
observe_llm = create_observability_decorator(
    framework_name="langfuse",
    config={"name": "llm_request", "capture_input": True}
)

@observe_llm
async def handle_single_response(self, input_data: Dict) -> Any:
    # ... rest of the code
    ...

Do you believe this is sufficient?
Side note: I didn't test :-)

@brnaba-aws (Contributor)

We have started some experimentation with Langfuse and minimal modifications to our framework.
So far so good; we still need to finalize the design a bit.

@maxritter (Author) commented Dec 21, 2024

@brnaba-aws Thanks for your efforts! I do not have time in the next couple of days to tackle this, but the way I thought about implementing it would be using https://langfuse.com/docs/sdk/python/low-level-sdk to include:

  • Traces (Tracks agent execution from input query to final output including token usage and all spans / generations)
  • Spans (Log sub-events like Knowledge Base call including input / output, execution time)
  • Generations (Metadata about model calls including input / output messages, cost, execution time)
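The three event types above could be recorded with the low-level SDK roughly as follows; this is a hedged sketch against the v2-style Langfuse API, where `record_invocation` and `make_usage` are my own names and the model id and field values are placeholders.

```python
def make_usage(input_tokens: int, output_tokens: int) -> dict:
    """Token usage in the shape Langfuse expects for cost calculation."""
    return {"input": input_tokens, "output": output_tokens, "unit": "TOKENS"}

def record_invocation(user_query: str, answer: str, kb_chunks: list, usage: dict) -> None:
    from langfuse import Langfuse  # pip install langfuse; reads LANGFUSE_* env vars

    langfuse = Langfuse()

    # Trace: the whole agent execution, from input query to final output.
    trace = langfuse.trace(name="orchestrator_run", input=user_query, output=answer)

    # Span: a sub-event such as the Knowledge Base retrieval.
    trace.span(name="knowledge_base_lookup", input=user_query, output=kb_chunks)

    # Generation: metadata about the model call, including token usage.
    trace.generation(
        name="bedrock_completion",
        model="anthropic.claude-3-sonnet",  # placeholder model id
        input=user_query,
        output=answer,
        usage=usage,
    )

    langfuse.flush()  # ensure events are sent before the process exits
```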

Here is a good example of how the different types of events can be captured after calling the invoke_agent function from Boto3: https://github.com/aws-samples/amazon-bedrock-agents-cancer-biomarker-discovery/blob/c86dd6c07a445d8c3134d2eb11db5667ebf49945/streamlitapp/app.py#L230

This is not a complete list of everything that can be captured; there is even more in https://docs.aws.amazon.com/bedrock/latest/userguide/trace-events.html.

Sending all of that emitted information to an open-source observability platform like Langfuse would, in my mind, be quite powerful for moving towards serious production use-cases, and would be a strong alternative to the combination of LangGraph and LangSmith, where a similar approach is already in place.

With this in place, folks can utilize https://github.com/aws-samples/deploy-langfuse-on-ecs-with-fargate to deploy a self-hosted Langfuse instance to ECS.

@brnaba-aws (Contributor)

Thanks for sharing this. I think the best option for us would be a callback for each step (before/after calling the LLM, tools, retriever, agents, ...).
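A minimal sketch of that callback idea (all names hypothetical): a base class with before/after hooks for each step, which an observability integration (Langfuse, OpenTelemetry, ...) could subclass.

```python
from typing import Any

class TraceCallbacks:
    """No-op base class; integrations override only the hooks they need."""
    def before_llm(self, prompt: str) -> None: ...
    def after_llm(self, prompt: str, response: str, usage: dict) -> None: ...
    def before_tool(self, name: str, args: dict) -> None: ...
    def after_tool(self, name: str, result: Any) -> None: ...

class CollectingCallbacks(TraceCallbacks):
    """Example subclass that simply records events in order."""
    def __init__(self):
        self.events = []

    def before_llm(self, prompt):
        self.events.append(("before_llm", prompt))

    def after_llm(self, prompt, response, usage):
        self.events.append(("after_llm", response, usage))

# The orchestrator would invoke the hooks around each step, e.g.:
cb = CollectingCallbacks()
cb.before_llm("What is the weather?")
cb.after_llm("What is the weather?", "Sunny.", {"input": 5, "output": 2})
```

An advantage of this design is that the framework stays dependency-free: users who want Langfuse spans or OpenTelemetry traces implement the hooks themselves.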
