
SpeziLLMOpenAI: Replace MacPaw/OpenAI With Generated API Calls #64

Open
wants to merge 51 commits into base: main
Conversation

@paulhdk (Contributor) commented Aug 27, 2024

SpeziLLMOpenAI: Replace MacPaw/OpenAI With Generated API Calls

♻️ Current situation & Problem

This PR replaces the MacPaw/OpenAI package with generated API calls by the swift-openapi-generator package.
Calls are generated from OpenAI's official OpenAPI spec.
Closes: #49

This PR does not add any new features but simply replicates the existing feature set with the generated API calls.
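For context, swift-openapi-generator is driven by a small configuration file placed next to the OpenAPI spec. A minimal sketch is shown below; the exact options used in this PR may differ, and the option names follow the swift-openapi-generator documentation:

```yaml
# openapi-generator-config.yaml (sketch; the PR's actual configuration may differ)
generate:
  - types
  - client
accessModifier: public
```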

I've done my best to keep track of all known issues, both with FIXMEs in the code and in the following list.

Current Issues

  • Sources/SpeziLLMOpenAI/LLMOpenAISession+Generation.swift does not handle the "[DONE]" message sent by the API to conclude a stream. There is currently a hacky workaround that catches the error that is thrown in that case. I'm not quite sure yet how to handle that case elegantly.
  • There are several do ... catch blocks that catch OpenAI package-specific errors, which I had to comment out. I have not yet found a semantically equivalent solution for the generated API calls.
  • The LLMFunctionParameterItemSchema type does not use a generated type yet.
  • The convenience initialisers in SpeziLLMOpenAI/FunctionCalling should be, if possible, refactored, as they currently have a lot of optional bindings.
  • Error handling still needs to be corrected in several places.
  • Currently, swift-openapi-generator expects an openapi.yaml and a configuration file in the TestApp, which is why this PR contains duplicate OpenAPI specs and configuration files. I'm not quite sure why it expects them in the TestApp, but I suspect it has something to do with the generated types being used in the TestApp's model selection mechanism.
  • The SpeziLLMTests are currently not passing. Because the test errors are related to the issues above, I'll update the tests once I've addressed all of them.
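On the "[DONE]" issue above: one way to avoid the catch-the-decoding-error workaround is to filter the sentinel out before it ever reaches the decoder. The following is a self-contained sketch; `StreamChunk` and `decodeStreamLines` are illustrative stand-ins, not the PR's actual types:

```swift
import Foundation

// Minimal stand-in for one decoded stream event; the real types in
// LLMOpenAISession+Generation.swift come from the generated API client.
struct StreamChunk: Decodable {
    let content: String
}

// Skips OpenAI's "[DONE]" sentinel, which terminates a chat-completion
// stream, so it never reaches the JSON decoder and no error is thrown.
func decodeStreamLines(_ lines: [String]) throws -> [StreamChunk] {
    var chunks: [StreamChunk] = []
    for line in lines {
        guard line != "[DONE]" else { break }
        chunks.append(try JSONDecoder().decode(StreamChunk.self, from: Data(line.utf8)))
    }
    return chunks
}

let chunks = try decodeStreamLines([
    #"{"content": "Hello"}"#,
    #"{"content": ", world"}"#,
    "[DONE]"
])
assert(chunks.count == 2)
```

The same guard works inside an `AsyncSequence` loop over the server-sent-event lines.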

⚙️ Release Notes

  • Replace the MacPaw/OpenAI package with Apple/swift-openapi-generator, which is able to generate API calls directly from OpenAI's official OpenAPI spec.

📚 Documentation

As no new functionality is added, nothing should change here (unless I missed something).

✅ Testing

This PR passes the existing tests. Since no new functionality has been added, I believe this should suffice.

📝 Code of Conduct & Contributing Guidelines

By creating this pull request, you agree to follow our Code of Conduct and Contributing Guidelines:

@paulhdk paulhdk requested a review from PSchmiedmayer August 27, 2024 10:09
@PSchmiedmayer PSchmiedmayer removed their request for review August 30, 2024 18:51
@PSchmiedmayer PSchmiedmayer added the enhancement New feature or request label Aug 30, 2024
@paulhdk paulhdk force-pushed the generate-api-calls branch 2 times, most recently from d5a4eb3 to 3ded304 Compare September 13, 2024 16:33
@paulhdk paulhdk marked this pull request as ready for review September 13, 2024 17:49
@paulhdk paulhdk force-pushed the generate-api-calls branch 3 times, most recently from 48f3c0b to 974449b Compare October 4, 2024 21:20
codecov bot commented Oct 4, 2024

Codecov Report

Attention: Patch coverage is 54.38596% with 260 lines in your changes missing coverage. Please review.

Project coverage is 38.58%. Comparing base (fd9f1ca) to head (42f9079).
Report is 1 commit behind head on main.

Files with missing lines Patch % Lines
...peziLLMOpenAI/LLMOpenAISession+Configuration.swift 0.00% 80 Missing ⚠️
...ources/SpeziLLMOpenAI/LLMOpenAISession+Setup.swift 0.00% 47 Missing ⚠️
...s/SpeziLLMOpenAI/LLMOpenAISession+Generation.swift 0.00% 27 Missing ⚠️
Sources/SpeziLLMOpenAI/LLMOpenAIError.swift 0.00% 22 Missing ⚠️
...ng/LLMFunctionParameterWrapper+OptionalTypes.swift 86.56% 16 Missing ⚠️
...SpeziLLMOpenAI/Helpers/LLMOpenAIStreamResult.swift 0.00% 16 Missing ⚠️
...nCalling/LLMFunctionParameterSchemaCollector.swift 0.00% 11 Missing ⚠️
...lling/LLMFunctionParameterWrapper+ArrayTypes.swift 88.74% 8 Missing ⚠️
...tionCalling/LLMFunctionParameterWrapper+Enum.swift 85.72% 8 Missing ⚠️
...g/LLMFunctionParameterWrapper+PrimitiveTypes.swift 83.68% 8 Missing ⚠️
... and 4 more
Additional details and impacted files

Impacted file tree graph

@@            Coverage Diff             @@
##             main      #64      +/-   ##
==========================================
+ Coverage   38.52%   38.58%   +0.07%     
==========================================
  Files          65       65              
  Lines        2355     2517     +162     
==========================================
+ Hits          907      971      +64     
- Misses       1448     1546      +98     
Files with missing lines Coverage Δ
...penAI/Configuration/LLMOpenAIModelParameters.swift 100.00% <100.00%> (ø)
...iLLMOpenAI/Configuration/LLMOpenAIParameters.swift 100.00% <ø> (ø)
Sources/SpeziLLMOpenAI/Helpers/Chat+OpenAI.swift 0.00% <ø> (ø)
...I/Onboarding/LLMOpenAIAPITokenOnboardingStep.swift 100.00% <ø> (ø)
...enAI/Onboarding/LLMOpenAIModelOnboardingStep.swift 97.88% <100.00%> (+0.10%) ⬆️
.../FunctionCalling/LLMFunctionParameterWrapper.swift 61.54% <50.00%> (+24.04%) ⬆️
...ling/LLMFunctionParameterWrapper+CustomTypes.swift 92.00% <91.31%> (-8.00%) ⬇️
Sources/SpeziLLMOpenAI/LLMOpenAISession.swift 22.86% <0.00%> (ø)
...urces/SpeziLLMOpenAI/LLMOpenAIAuthMiddleware.swift 0.00% <0.00%> (ø)
...lling/LLMFunctionParameterWrapper+ArrayTypes.swift 89.88% <88.74%> (-10.12%) ⬇️
... and 9 more

... and 7 files with indirect coverage changes


Continue to review full report in Codecov by Sentry.

Legend
Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update fd9f1ca...42f9079. Read the comment docs.

@PSchmiedmayer (Member) left a comment:

Thank you for all the work here @paulhdk; very important to improve this setup and build on the OpenAPI specification!

It would be amazing to get a first insight from @philippzagar to get a good round of feedback.

Package.swift (outdated review thread, resolved)
@@ -51,7 +50,7 @@ public struct LLMOpenAIModelParameters: Sendable {
/// - logitBias: Alters specific token's likelihood in completion.
/// - user: Unique identifier for the end-user, aiding in abuse monitoring.
public init(
responseFormat: ChatQuery.ResponseFormat? = nil,
responseFormat: Components.Schemas.CreateChatCompletionRequest.response_formatPayload? = nil,
Member:
I am wondering if we should add compact type aliases for this?

Contributor Author:

I’ve added an LLMOpenAIRequestType alias. Does that work for you?

Should we also introduce an alias for Components.Schemas in general? This won’t make the types shorter, but something like LLMOpenAIGeneratedTypes could improve readability, maybe?

Member:

I think we can introduce well defined and named typealias for the specific types that we use in our API surface; we should see if we can make them compact and focus on them.
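Such aliases might look like the following sketch. `LLMOpenAIRequestType` is the alias paulhdk mentions above; the second alias is purely illustrative, with the generated type names taken from the snippet earlier in this thread:

```swift
/// Compact alias over the generated request type; this one exists in the PR.
public typealias LLMOpenAIRequestType = Components.Schemas.CreateChatCompletionRequest

/// Purely illustrative: a similar alias for the nested response-format payload.
public typealias LLMOpenAIResponseFormat =
    Components.Schemas.CreateChatCompletionRequest.response_formatPayload
```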

Comment on lines +31 to +34
/// "firstName": [
/// "type": "string",
/// "description": "The first name of the person")
/// ],
Member:

I am wondering if we can add a nicely typed type for this instead of a dictionary; it can always map to a dictionary under the hood. It would be cool to avoid losing that type-safe element.

Contributor Author:

Previously, SpeziLLMOpenAI wrapped around the Swift types provided by the OpenAI package, which would then eventually be passed to the API.
With the OpenAI OpenAPI spec, such types aren't generated, but the JSON schemas are instead validated for correctness as they're being encoded in the OpenAPIObjectContainer type.

Introducing such wrapper types again would require precise alignment with the OpenAI API, which, I imagine, would make them harder to maintain over time.
I suspect that's one reason why the official OpenAI Python package, which is also generated from the OpenAI OpenAPI specification, does not offer wrapper types either, AFAICT.

What do you think?

Member:

I think adding an extension initializer/function that takes well-typed arguments, for those who want to use them, would be beneficial and would avoid issues with string keys that are incorrect or malformatted. Still allowing a dictionary to be passed in could remain as an escape hatch. The OpenAPI surface is quite stable, and if we use, e.g., an enum for the type of the parameter, it can also have an `other` case with an associated string value.

Member:

I most definitely agree with @PSchmiedmayer here, I would follow the typical Swift paradigm and provide as much type safety as possible.
As mentioned by @PSchmiedmayer, I would implement well-typed inits / functions etc. that then map to the underlying String dictionary. And yes, an escape hatch that passes the raw dictionary might be beneficial!
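A self-contained sketch of the well-typed surface discussed above, with all names hypothetical: an enum for the JSON-schema "type" field with an escape-hatch case, mapped down to the raw dictionary the validated container ultimately consumes:

```swift
// All names here (SchemaType, SchemaProperty) are illustrative, not part
// of the PR; the escape hatch is the `other` case with an associated value.
enum SchemaType {
    case string, integer, number, boolean
    case other(String) // escape hatch for types not modeled explicitly

    var rawValue: String {
        switch self {
        case .string: return "string"
        case .integer: return "integer"
        case .number: return "number"
        case .boolean: return "boolean"
        case .other(let raw): return raw
        }
    }
}

struct SchemaProperty {
    let type: SchemaType
    let description: String?

    /// Maps the typed representation down to the raw String-keyed
    /// dictionary that the container validation path expects.
    var dictionaryRepresentation: [String: Any] {
        var dict: [String: Any] = ["type": type.rawValue]
        if let description {
            dict["description"] = description
        }
        return dict
    }
}

let firstName = SchemaProperty(type: .string, description: "The first name of the person")
assert(firstName.dictionaryRepresentation["type"] as? String == "string")
```

The raw-dictionary initialiser can stay alongside this as the escape hatch mentioned above.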

Contributor Author:

Just to be clear, and because I'm a little clueless about how to implement this easily: the only way I see here is to re-implement these two types from the MacPaw OpenAI package that SpeziLLM was using previously, and then revert the changes around the initialisers accordingly.

/// Alias of the OpenAI `JSONSchema/Property` type, describing properties within an object schema.
public typealias LLMFunctionParameterPropertySchema = ChatQuery.ChatCompletionToolParam.FunctionDefinition.FunctionParameters.Property
/// Alias of the OpenAI `JSONSchema/Item` type, describing array items within an array schema.
public typealias LLMFunctionParameterItemSchema = ChatQuery.ChatCompletionToolParam.FunctionDefinition.FunctionParameters.Property.Items

I was not able to find definitive documentation for the fields that the OpenAI API accepts here, including the ones currently supported by SpeziLLM, e.g., minItems, maxItems, uniqueItems.

The function calling documentation mentions none of them: https://platform.openai.com/docs/guides/function-calling

What do you think?

Sources/SpeziLLMOpenAI/LLMOpenAIError.swift (review thread, resolved)
Sources/SpeziLLMOpenAI/LLMOpenAISession.swift (review thread, resolved)
@paulhdk changed the title from "SpeziLLMOpenAI: Repalce MacPaw/OpenAI With Generated API Calls" to "S" Oct 24, 2024
@paulhdk changed the title from "S" to "SpeziLLMOpenAI: Repalce MacPaw/OpenAI With Generated API Calls" Oct 24, 2024
@paulhdk paulhdk force-pushed the generate-api-calls branch from bdd7127 to 1f52a7f Compare October 28, 2024 08:54
@PSchmiedmayer (Member) left a comment:

Thank you for continuing to work on this @paulhdk!

I had a quick sync with @philippzagar and he will take a closer look at the PR to provide insights on the different changes; would be great to update the PR to the latest version of main to resolve the conflicts; I think after the feedback from @philippzagar we should be ready to get this merged 🚀

Comment on lines 38 to 56
.Input(body: .json(LLMOpenAIRequestType(
messages: openAIContext,
model: schema.parameters.modelType,
frequency_penalty: schema.modelParameters.frequencyPenalty,
logit_bias: schema.modelParameters.logitBias.additionalProperties.isEmpty ? nil : schema
.modelParameters
.logitBias,
max_tokens: schema.modelParameters.maxOutputLength,
n: schema.modelParameters.completionsPerOutput,
presence_penalty: schema.modelParameters.presencePenalty,
response_format: schema.modelParameters.responseFormat,
seed: schema.modelParameters.seed,
stop: LLMOpenAIRequestType.stopPayload.case2(schema.modelParameters.stopSequence),
stream: true,
temperature: schema.modelParameters.temperature,
top_p: schema.modelParameters.topP,
tools: functions.isEmpty ? nil : functions,
user: schema.modelParameters.user
)))
Member:

Might be nice to format this similar to our other code bases; might be applicable to other parts as well:

Suggested change
.Input(body: .json(LLMOpenAIRequestType(
messages: openAIContext,
model: schema.parameters.modelType,
frequency_penalty: schema.modelParameters.frequencyPenalty,
logit_bias: schema.modelParameters.logitBias.additionalProperties.isEmpty ? nil : schema
.modelParameters
.logitBias,
max_tokens: schema.modelParameters.maxOutputLength,
n: schema.modelParameters.completionsPerOutput,
presence_penalty: schema.modelParameters.presencePenalty,
response_format: schema.modelParameters.responseFormat,
seed: schema.modelParameters.seed,
stop: LLMOpenAIRequestType.stopPayload.case2(schema.modelParameters.stopSequence),
stream: true,
temperature: schema.modelParameters.temperature,
top_p: schema.modelParameters.topP,
tools: functions.isEmpty ? nil : functions,
user: schema.modelParameters.user
)))
.Input(body:
.json(
LLMOpenAIRequestType(
messages: openAIContext,
model: schema.parameters.modelType,
frequency_penalty: schema.modelParameters.frequencyPenalty,
logit_bias: schema.modelParameters.logitBias.additionalProperties.isEmpty ? nil : schema
.modelParameters
.logitBias,
max_tokens: schema.modelParameters.maxOutputLength,
n: schema.modelParameters.completionsPerOutput,
presence_penalty: schema.modelParameters.presencePenalty,
response_format: schema.modelParameters.responseFormat,
seed: schema.modelParameters.seed,
stop: LLMOpenAIRequestType.stopPayload.case2(schema.modelParameters.stopSequence),
stream: true,
temperature: schema.modelParameters.temperature,
top_p: schema.modelParameters.topP,
tools: functions.isEmpty ? nil : functions,
user: schema.modelParameters.user
)
)
)

Member:

Agree with @PSchmiedmayer here!

Contributor Author:

Is this coding style enforced in the other code bases?

I played around with some SwiftFormat rules yesterday while trying to make them produce the example above, but I was unsuccessful and had to update it manually. Is there a better way to catch all the remaining examples in the PR?
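For reference, SwiftFormat's wrapping behaviour is configurable via a `.swiftformat` file. Options along these lines get close to the suggested layout; the option names come from SwiftFormat's documentation, but the exact values here are untested assumptions:

```
--wraparguments before-first
--wrapparameters before-first
--maxwidth 150
```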

@philippzagar (Member) left a comment:

Thanks for the great PR @paulhdk; it's great to move away from the OpenAI package! 🚀
I had some feedback on parts of the PR; please also clean up the leftover compile errors in the testing code, as well as any linter issues.
Also, it would be great to get rid of the Swift warnings/deprecations that are flagged by the CI!

One more thing: please add brief documentation (in the README and DocC) on how to (re-)generate the client (needed if the API changes at some point).

Sources/SpeziLLM/Models/LLMContextEntity.swift (outdated review thread, resolved)

"required": requiredPropertyNames
])
} catch {
logger.error("Error creating OpenAPIObjectContainer.")
Member:

Is fallthrough error handling really beneficial here?
To be fair, errors should be very rare here, occurring only when the validated container cannot be created.

Contributor Author:

fatalError(), then? Or can you think of a nicer way to handle this?
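An alternative to both the fallthrough and fatalError() would be to propagate the failure to the caller as a typed error. A self-contained sketch follows; the names are illustrative, and `JSONSerialization.isValidJSONObject` merely stands in for the validation that `OpenAPIObjectContainer` performs:

```swift
import Foundation

// Illustrative error type; not part of the PR.
enum LLMOpenAISchemaError: Error {
    case invalidSchema
}

// Surfaces the container-creation failure instead of logging and falling
// through (or crashing via fatalError()).
func makeSchema(from dictionary: [String: Any]) throws -> [String: Any] {
    guard JSONSerialization.isValidJSONObject(dictionary) else {
        throw LLMOpenAISchemaError.invalidSchema
    }
    return dictionary
}

let schema = try makeSchema(from: ["type": "object"])
assert(schema["type"] as? String == "object")

// A non-JSON value (e.g. a Date) is rejected instead of crashing.
assert((try? makeSchema(from: ["bad": Date()])) == nil)
```

Callers that genuinely cannot proceed can still convert the thrown error into a crash at the call site, which keeps that policy decision out of the schema-building code.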

XCTAssertEqual(schemaArrayBool.schema.items?.type, .boolean)
XCTAssertEqual(schemaArrayBool.schema.items?.const, "true")
schema = schemaArrayBool.schema.value
items = schema["items"] as? [String: Any]
Member:

It's a bit ugly that we lose all underlying type information of the respective properties of the schema here... Maybe some computed properties on the schema with a specific type would make sense?

Contributor Author:

I agree. The problem is that the underlying type for the schema is the generated OpenAPIRuntime.OpenAPIObjectContainer type, which boils down to a [String: (any Sendable)?] type and is what we get back from the API / generated calls.

Which schema are you referring to here? I don't think I'm following where you would want that computed property to be.
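One possible reading of the computed-property idea, as a self-contained sketch: typed accessors layered over the untyped `[String: (any Sendable)?]` storage mentioned above. All names here are illustrative, not part of the PR:

```swift
// Stand-in for the dictionary that OpenAPIRuntime.OpenAPIObjectContainer
// boils down to; the accessors recover typed views over it.
typealias RawSchema = [String: (any Sendable)?]

extension Dictionary where Key == String, Value == (any Sendable)? {
    /// The JSON-schema "type" field, if present and a string.
    var schemaType: String? {
        self["type"] as? String
    }

    /// The nested "items" schema of an array schema, if present.
    var itemsSchema: RawSchema? {
        self["items"] as? RawSchema
    }
}

let itemSchema: RawSchema = ["type": "boolean"]
let arraySchema: RawSchema = ["type": "array", "items": itemSchema]
assert(arraySchema.schemaType == "array")
assert(arraySchema.itemsSchema?.schemaType == "boolean")
```

This would let tests assert on `schema.itemsSchema?.schemaType` instead of chaining `as? [String: Any]` casts by hand.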

])
} catch {
print("unable to initialse schema in LLMOpenAIParameterCustomTypesTets")
return .init()
Member:

What happens down the line if the validation fails and an empty schema is created?

@paulhdk (Contributor Author) commented Dec 21, 2024:

Shouldn't something like this do the trick?

diff --git a/Tests/SpeziLLMTests/LLMOpenAIParameterTests+CustomTypes.swift b/Tests/SpeziLLMTests/LLMOpenAIParameterTests+CustomTypes.swift
index ca262bd..3bbce43 100644
--- a/Tests/SpeziLLMTests/LLMOpenAIParameterTests+CustomTypes.swift
+++ b/Tests/SpeziLLMTests/LLMOpenAIParameterTests+CustomTypes.swift
@@ -11,6 +11,10 @@ import XCTest


 final class LLMOpenAIParameterCustomTypesTests: XCTestCase {
+    override func setUp() {
+        continueAfterFailure = false
+    }
+
     struct CustomType: LLMFunctionParameterArrayElement, Encodable, Equatable {
         static var itemSchema: LLMFunctionParameterItemSchema = {
             do {
@@ -28,8 +32,7 @@ final class LLMOpenAIParameterCustomTypesTests: XCTestCase {
                     ]
                 ])
             } catch {
-                print("unable to initialse schema in LLMOpenAIParameterCustomTypesTets")
-                return .init()
+                XCTFail("Unable to initialse schema in LLMOpenAIParameterCustomTypesTets")
             }
         }()

The compiler is still expecting a return after the XCTFail(). Do you have another suggestion?
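One way to satisfy the compiler, kept as a sketch using the names from the diff above: follow the XCTFail with preconditionFailure(), which returns `Never`, so no dummy value is needed on the failure path:

```swift
static var itemSchema: LLMFunctionParameterItemSchema = {
    do {
        return try .init([
            // ... schema dictionary as in the diff above ...
        ])
    } catch {
        XCTFail("Unable to initialise schema in LLMOpenAIParameterCustomTypesTests: \(error)")
        // preconditionFailure returns `Never`, so the closure does not
        // need to produce a value after the failure is recorded.
        preconditionFailure("Schema initialisation failed")
    }
}()
```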

@vishnuravi vishnuravi changed the title SpeziLLMOpenAI: Repalce MacPaw/OpenAI With Generated API Calls SpeziLLMOpenAI: Replace MacPaw/OpenAI With Generated API Calls Nov 22, 2024
@PSchmiedmayer (Member):

@paulhdk A short ping on the state of this PR; would be amazing to get it merged before we tag a new 0.X version tag in combination with the MLX changes 🚀

@paulhdk (Contributor Author) commented Dec 6, 2024:

First of all, thank you @philippzagar for the review!

@paulhdk A short ping on the state of this PR; would be amazing to get it merged before we tag a new 0.X version tag in combination with the MLX changes 🚀

I'm OOO until the end of the week and will be able to address all the feedback here in full force starting Monday.

I'm sorry for letting this sit so long; that wasn't my plan either.

@PSchmiedmayer (Member):

Thank you @paulhdk! 🚀

@paulhdk (Contributor Author) commented Dec 16, 2024

@philippzagar (Member):

@paulhdk Great to see tons of progress on the PR! 🥳
Let me know as soon as you're ready for another review, would be great to get all of these changes in soon! 🚀

@philippzagar (Member):

Hey @paulhdk,
just following up here: Can you give us an estimate when you will be able to finish the PR? I guess there shouldn't be too much missing?
It would be great to get all of these changes in soon, also in order to unblock other related issues / PRs 🚀

@paulhdk (Contributor Author) commented Dec 21, 2024:

Hey @paulhdk, just following up here: Can you give us an estimate when you will be able to finish the PR? I guess there shouldn't be too much missing? It would be great to get all of these changes in soon, also in order to unblock other related issues / PRs 🚀

I've been sick all week; today is the first day where I feel ready to do some work.
My personal goal would be to address all the review comments before Christmas.

The wrapper types suggested by you and @PSchmiedmayer are still missing and will require some thinking on my part. Doing my best to get all of this done ASAP.

@paulhdk (Contributor Author) commented Dec 22, 2024:

@paulhdk Great to see tons of progress on the PR! 🥳 Let me know as soon as you're ready for another review, would be great to get all of these changes in soon! 🚀

@philippzagar I've left questions for the remaining comments above where I need your help. Thank you for taking another look!

Labels
enhancement New feature or request
Projects
Status: In Progress
Development

Successfully merging this pull request may close these issues.

Use OpenAPI generator from the OpenAI API spec
3 participants