
Advanced Function Calling example #116

Open
bebechien opened this issue Dec 10, 2024 · 1 comment

Description of the bug:

The recent PR #114 for fine-tuning Gemma for function calling focuses solely on generating a complete response in one go. However, it lacks an example demonstrating the typical function calling workflow.

To showcase the full capabilities of this fine-tuned model, we need an example that illustrates the following steps:

  1. Prompt Definition and Function Registration: Define the user prompt and register the available functions for Gemma to utilize.
  2. Function Call Identification: Gemma analyzes the prompt and determines that a function call is necessary.
  3. Function Selection and Argument Generation: Gemma selects the most appropriate function and generates the required arguments.
  4. Function Execution: The chosen function is executed with the generated arguments.
  5. Response Integration: The function's output is integrated into the final response to the user.

This comprehensive example will better demonstrate the end-to-end process of using the fine-tuned Gemma model for function calling.
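A minimal sketch of that loop in plain Python could look like the following. The generate() helper, the FUNCTIONS registry, and the prompt text are placeholders for whatever inference path and registration mechanism the notebook ends up using, not an existing API:

```python
import json
import statistics

# --- 1. Prompt definition and function registration (illustrative) ----------
def calculate_median(numbers):
    return statistics.median(numbers)

FUNCTIONS = {"calculate_median": calculate_median}

SYSTEM_PROMPT = (
    "You are a helpful assistant with access to the following functions. "
    "Use them if required -\n"
    '{"name": "calculate_median", "description": "Calculate the median of a list of numbers", ...}\n'
    "To use these functions respond with:\n"
    '{"name": "function_name", "arguments": {"arg_1": "value_1", ...}}'
)

def generate(prompt: str) -> str:
    """Placeholder for a call to the (fine-tuned) Gemma model,
    e.g. via transformers or the AI Studio API."""
    raise NotImplementedError

def answer(user_message: str) -> str:
    prompt = f"{SYSTEM_PROMPT}\n\nUSER: {user_message}"
    reply = generate(prompt)

    # --- 2 & 3. Detect a function call and extract its name/arguments -------
    try:
        call = json.loads(reply.strip().splitlines()[0])
    except (json.JSONDecodeError, IndexError):
        return reply  # no function call: the reply is already the final answer
    if not isinstance(call, dict) or call.get("name") not in FUNCTIONS:
        return reply

    # --- 4. Execute the selected function with the generated arguments ------
    result = FUNCTIONS[call["name"]](**call["arguments"])

    # --- 5. Feed the result back so the model can write the final answer ----
    return generate(f"{prompt}\n{reply}\nFunction result: {result}\nAnswer:")
```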

Actual vs expected behavior:

No response

Any other information you'd like to share?

No response

bebechien commented Dec 16, 2024

The base Gemma 2 2B model, without fine-tuning, already attempts to follow the function calling instructions to some extent. See the example below (tested in AI Studio).

Prompt:

user
You are a helpful assistant with access to the following functions. Use them if required -
{
  "name": "calculate_median",
  "description": "Calculate the median of a list of numbers",
  "parameters": {
    "type": "object",
    "properties": {
      "numbers": {
        "type": "array",
        "items": {
          "type": "number"
        },
        "description": "The list of numbers"
      }
    },
    "required": [
      "numbers"
    ]
  }
}
To use these functions respond with:
{"name": "function_name", "arguments": {"arg_1": "value_1", "arg_2": "value_2", ...}}

Then finally respond with:
Answer:

user
USER: Hi, I have a list of numbers and I need to find the median. The numbers are [5, 2, 9, 1, 7, 4, 6, 3, 8]

model

Output:

{"name": "calculate_median", "arguments": {"numbers": [5, 2, 9, 1, 7, 4, 6, 3, 8]}}

Answer:

The median of the list [5, 2, 9, 1, 7, 4, 6, 3, 8] is 4. 
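For reference, running the execution step of the workflow on this output (with calculate_median implemented locally; the registry here is illustrative, not part of any existing notebook) shows what the integrated answer should have been based on:

```python
import json
import statistics

def calculate_median(numbers):
    """Local implementation backing the declared calculate_median function."""
    return statistics.median(numbers)

# Hypothetical name -> callable registry (step 1 of the workflow above).
FUNCTIONS = {"calculate_median": calculate_median}

# First line of the model output shown above (steps 2 and 3).
reply = '{"name": "calculate_median", "arguments": {"numbers": [5, 2, 9, 1, 7, 4, 6, 3, 8]}}'

# Step 4: execute the selected function with the generated arguments.
call = json.loads(reply)
result = FUNCTIONS[call["name"]](**call["arguments"])
print(result)  # 5 -- the value step 5 should feed back for the final answer
```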


I think this makes it harder to demonstrate the effectiveness of fine-tuning, especially for function calling. IIRC, we observed something similar with chain-of-thought prompting, which Gemma 2 no longer needs for certain tasks. The best way to showcase the benefits of fine-tuning in this context remains unclear.

BTW, the correct answer for the median is 5.
