fix(openai): responses function_call mapping #8935
base: main
Conversation
* handle function_call messages loaded from session file
* use call_id as item id if response id is not available
* ensure function_call item id is prefixed with fc
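The ID handling described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual implementation; the function name and parameters are assumptions.

```typescript
// Hypothetical sketch of the item-id normalization the commits describe.
// Not the PR's real code; names are illustrative.
function toFunctionCallItemId(
  responseItemId: string | undefined,
  callId: string,
): string {
  // Fall back to call_id when the original response item id was not stored
  // (e.g. function_call messages loaded from an older session file).
  const raw = responseItemId ?? callId;
  // Ensure the function_call item id carries the "fc" prefix.
  return raw.startsWith("fc") ? raw : `fc_${raw}`;
}
```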
Reviewed the changes - no documentation updates needed. This PR fixes internal OpenAI Responses API type conversion logic for function_call messages loaded from session files. The changes are purely implementation details that don't affect any user-facing features, configuration options, or documented behavior. |
No issues found across 1 file
* synthesize a single orchestrator function_call when multiple tool calls follow a reasoning (thinking) message to avoid protocol violations
RomneyDa left a comment
@Quazistax appreciate the contribution. This seems like a good change; I added some nitpick comments. Let's add some unit tests for these conversions.
core/llm/openaiTypeConverters.ts (Outdated)

```typescript
const wrapperItemId =
  rawWrapperId && rawWrapperId.startsWith("fc")
    ? rawWrapperId
    : `fc_${rawWrapperId}`;
```
I think this tool call ID modification would mess things up in the GUI, where we use the tool call ID for streaming tool call output and a variety of other things. If it doesn't match the original, many GUI features will fail.
This only affects what is sent to the LLM; GUI ids stay unchanged.
core/llm/openaiTypeConverters.ts (Outdated)

```typescript
const shouldSynthesizeOrchestrator =
  toolCalls.length > 1 && !!prevReasoningId;

if (shouldSynthesizeOrchestrator) {
```
consider restructuring so that synthesizeOrchestratorCall has pure inputs and outputs rather than directly modifying its inputs
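A sketch of the suggested restructuring: the helper receives its inputs and returns the synthesized item, and the caller appends it, instead of the helper mutating a shared array. The types, field shapes, and function name here are assumptions for illustration, not the converter's real signatures.

```typescript
// Illustrative shape of a Responses-style function_call item; an assumption,
// not the codebase's actual type.
interface FunctionCallItemSketch {
  type: "function_call";
  id: string;
  call_id: string;
  name: string;
  arguments: string;
}

// Pure version: no mutation of a shared `input` array; the caller decides
// where the returned item goes. "orchestrator" is a placeholder name.
function synthesizeOrchestratorCallSketch(
  toolCalls: { id: string }[],
  prevReasoningId: string,
): FunctionCallItemSketch {
  return {
    type: "function_call",
    id: `fc_${prevReasoningId}`,
    call_id: toolCalls[0].id,
    name: "orchestrator",
    arguments: "{}",
  };
}
```

A pure helper like this is also straightforward to cover with the unit tests requested above, since its output depends only on its arguments.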
core/llm/openaiTypeConverters.ts (Outdated)

```typescript
const rawArgs = tc?.function?.arguments as string | undefined;
let parsedArgs: any = {};
try {
  parsedArgs = rawArgs && rawArgs.length ? JSON.parse(rawArgs) : {};
```
can use the existing safeParseArgs utils for this
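The existing util's exact signature isn't shown in this thread; a helper of this kind typically looks something like the sketch below. This is an assumption about its behavior, not the codebase's actual safeParseArgs.

```typescript
// Hedged sketch of a safeParseArgs-style helper: tolerate missing or
// malformed JSON arguments instead of throwing mid-conversion.
// The real util in the codebase may differ.
function safeParseArgsSketch(
  rawArgs: string | undefined,
): Record<string, unknown> {
  if (!rawArgs || rawArgs.length === 0) {
    return {};
  }
  try {
    return JSON.parse(rawArgs);
  } catch {
    // Malformed JSON (e.g. a partial stream or an old session file):
    // fall back to an empty object.
    return {};
  }
}
```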
core/llm/openaiTypeConverters.ts (Outdated)

```typescript
}

// Emit function_call items directly from toolCalls
emitFunctionCallsFor(toolCalls, input, respId, prevReasoningId);
```
consider restructuring so that emitFunctionCallsFor has pure inputs and outputs rather than directly modifying its inputs
core/llm/openaiTypeConverters.ts (Outdated)

```typescript
const respId = msg.metadata?.responsesOutputItemId as
  | string
  | undefined;
const prevMsgForThis = messages[i - 1] as ChatMessage | undefined;
```
this type casting seems unnecessary
core/llm/openaiTypeConverters.ts (Outdated)

```typescript
}

// Emit function_call items directly from toolCalls
emitFunctionCallsFor(toolCalls, input, respId, prevReasoningId);
```
Should this be if/else behavior? I.e., if we should synthesize, do that; else emit the function calls directly? Or I might be misunderstanding.
* (vscode) ensure tool calls stored in the chat "assistant" message store the original item id their function_calls arrived with
* (openai) handle function_call messages loaded from an old session file by creating the item id from the call id if there is no original item id
* (openai) emit all function_call items when sending a response
I investigated some more. Although the previous "hack" with an additional "wrapper" function call worked, it does not address the real issue and has some minor side effects.

Legacy logic in gui/src/redux/slices/sessionSlice.ts puts all consecutive function_call OpenAI messages under one "assistant" message that keeps only one "fc_" id (from the last function_call). Note: it looks like sessionSlice.ts is only used in the VS Code implementation, so the others (CLI, JetBrains) still need to be checked/updated.

About the previous "wrapper" hack/workaround: it turns out it does not matter what the arguments are. It can be a dummy function_call with empty arguments, just to send the old "fc_" item id in the response. Side effects: the AI can spend some thinking time confused, observing that it called more function calls than intended. That only seems to happen when the workaround is applied to the current reasoning response; I did not notice side effects when applying it to older reasoning messages. If a workaround is wanted for old sessions containing reasoning multi-tool messages, I can upload that too.
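The legacy grouping behavior described above can be illustrated with a small sketch. The shapes and field names are assumptions for illustration, not the actual sessionSlice.ts code.

```typescript
// Illustrative sketch of the legacy grouping: consecutive function_call
// messages are collapsed into one "assistant" message, so only the last
// "fc_" item id survives. Shapes are assumptions, not the real reducer.
interface FunctionCallMsgSketch {
  itemId: string;
  callId: string;
}

function groupIntoAssistantMessageSketch(calls: FunctionCallMsgSketch[]) {
  return {
    role: "assistant" as const,
    // Only the last function_call's item id is kept, which is why the
    // per-tool original ids are lost for multi-tool reasoning turns.
    responsesOutputItemId: calls[calls.length - 1].itemId,
    toolCalls: calls.map((c) => ({ id: c.callId })),
  };
}
```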
RomneyDa left a comment
@Quazistax see comments about removed mode. I haven't looked closely at the core code yet
Maybe fixes #8773, and with the 2nd commit it might help with #8933 too (I tried it with VS Code, not with JetBrains).
Description
AI Code Review
@continue-review
Checklist
Screen recording or screenshot
[ When applicable, please include a short screen recording or screenshot - this makes it much easier for us as contributors to review and understand your changes. See this PR as a good example. ]
Tests
[ What tests were added or updated to ensure the changes work as expected? ]
Summary by cubic
Fixes function_call mapping in OpenAI responses to emit valid items from session-loaded messages with stable, prefixed IDs and preserve original per-tool item IDs. Addresses #8773.
Written for commit c4ed9e4. Summary will update automatically on new commits.