Added OpenAI GPT-4o Mini model #1814
base: main
Conversation
👍 Looks good to me! Reviewed everything up to 6256a11 in 42 seconds
More details:
- Looked at 67 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 1 drafted comment based on config settings
1. lib/chat-setting-limits.ts:160
- Draft comment: The addition of gpt-4o-mini with a MAX_TEMPERATURE of 2.0 seems inconsistent with other models of similar capacity. Typically, models in this range have a MAX_TEMPERATURE of 1.0. Please verify if this is intended or if it should be aligned with similar models.
  MAX_TEMPERATURE: 1.0,
- Reason this comment was not posted: Confidence of 0% on close inspection, compared to threshold of 50%.
app/api/chat/openai/route.ts
Outdated
@@ -30,7 +30,8 @@ export async function POST(request: Request) {
     temperature: chatSettings.temperature,
     max_tokens:
       chatSettings.model === "gpt-4-vision-preview" ||
-      chatSettings.model === "gpt-4o"
+      chatSettings.model === "gpt-4o" ||
+      chatSettings.model === "gpt-4o-mini"
mini supports 16k output tokens - would this limit it to 4k?
It will. It should be:

const response = await openai.chat.completions.create({
  model: chatSettings.model as ChatCompletionCreateParamsBase["model"],
  messages: messages as ChatCompletionCreateParamsBase["messages"],
  temperature: chatSettings.temperature,
  max_tokens:
    chatSettings.model === "gpt-4-vision-preview" ||
    chatSettings.model === "gpt-4o"
      ? 4096
      : chatSettings.model === "gpt-4o-mini"
        ? 16383
        : null,
  stream: true
})
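The chained ternary above can be isolated into a small helper to make the per-model cap easier to read and test (the helper name `maxTokensForModel` is illustrative, not part of the PR):

```typescript
// Illustrative helper mirroring the max_tokens logic from the suggested fix.
// Returning null lets the API apply its own default for other models.
function maxTokensForModel(model: string): number | null {
  if (model === "gpt-4-vision-preview" || model === "gpt-4o") {
    return 4096
  }
  if (model === "gpt-4o-mini") {
    // gpt-4o-mini supports up to 16k output tokens, hence the higher cap.
    return 16383
  }
  return null
}
```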
Yeah, it's fixed! Thanks.
Need to fix MAX_TOKEN_OUTPUT_LENGTH for gpt-4o-mini in lib/chat-setting-limits.ts from 4096 tokens to 16383. Everything else is good :)
lib/chat-setting-limits.ts
Outdated
"gpt-4o-mini": {
  MIN_TEMPERATURE: 0.0,
  MAX_TEMPERATURE: 2.0,
  MAX_TOKEN_OUTPUT_LENGTH: 4096,
By the way, MAX_TOKEN_OUTPUT_LENGTH: 4096 will also limit gpt-4o-mini's maximum response length to 4k tokens; it should be 16383 tokens.
just updated it! thanks 🙌
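The agreed-upon limits entry can be sketched as a standalone snippet (the `ChatSettingLimits` type alias and the `MAX_CONTEXT_LENGTH` field name are assumptions for illustration; the other field names follow the diff quoted above):

```typescript
// Hypothetical sketch of the corrected gpt-4o-mini limits entry.
type ChatSettingLimits = {
  MIN_TEMPERATURE: number
  MAX_TEMPERATURE: number
  MAX_TOKEN_OUTPUT_LENGTH: number
  MAX_CONTEXT_LENGTH: number
}

const CHAT_SETTING_LIMITS: Record<string, ChatSettingLimits> = {
  "gpt-4o-mini": {
    MIN_TEMPERATURE: 0.0,
    MAX_TEMPERATURE: 2.0,
    // Corrected from 4096 per the review: gpt-4o-mini supports 16k output tokens.
    MAX_TOKEN_OUTPUT_LENGTH: 16383,
    MAX_CONTEXT_LENGTH: 128000
  }
}
```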
update output token in gpt4o mini
@mckaywrigley please merge this
Summary:
Added support for the new OpenAI GPT-4o Mini model, including chat setting limits, model definition, and type updates.
Key points:
- Added the gpt-4o-mini model to CHAT_SETTING_LIMITS in lib/chat-setting-limits.ts with specific temperature, token output length, and context length limits.
- Added the GPT4oMini model definition to lib/models/llm/openai-llm-list.ts with pricing and other details.
- Updated the OpenAILLMID type in types/llms.ts to include gpt-4o-mini.
Generated with ❤️ by ellipsis.dev