
Conversation

@Quazistax Quazistax commented Nov 29, 2025

Maybe fixes #8773; with the 2nd commit it might help with #8933 too (I tried it with VS Code, not with JetBrains)

Description

  • Handle function_call messages loaded from the session file
  • Use call_id as the item id if the response id is not available
  • Ensure the function_call item id is prefixed with fc
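The ID rule described in these bullets can be sketched as a small helper (hypothetical code: the `responsesOutputItemId` and `call_id` names are taken from the PR discussion, but this helper itself is not in the PR):

```typescript
// Hypothetical sketch of the item-id rule from this PR's description.
// Prefer the original Responses API item id; fall back to the call id
// for function_call messages loaded from older session files, and make
// sure the result carries the "fc" prefix the Responses API expects.
function deriveFunctionCallItemId(
  responsesOutputItemId: string | undefined,
  callId: string,
): string {
  const raw = responsesOutputItemId ?? callId;
  return raw.startsWith("fc") ? raw : `fc_${raw}`;
}
```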

AI Code Review

  • Team members only: AI review runs automatically when PR is opened or marked ready for review
  • Team members can also trigger a review by commenting @continue-review

Checklist

  • [ ] I've read the contributing guide
  • [ ] The relevant docs, if any, have been updated or created
  • [ ] The relevant tests, if any, have been updated or created

Screen recording or screenshot

[ When applicable, please include a short screen recording or screenshot - this makes it much easier for us as contributors to review and understand your changes. See this PR as a good example. ]

Tests

[ What tests were added or updated to ensure the changes work as expected? ]


Summary by cubic

Fixes function_call mapping in OpenAI responses to emit valid items from session-loaded messages with stable, prefixed IDs and preserve original per-tool item IDs. Addresses #8773.

  • Bug Fixes
    • Emit one function_call per tool call, preferring responsesOutputItemId; fallback to respId or call_id; ensure IDs are prefixed with fc_.
    • Propagate responsesOutputItemId through tool call state during streaming.
    • Fallback to an assistant message when no toolCalls exist but respId is present.

Written for commit c4ed9e4. Summary will update automatically on new commits.

* to handle function_call messages loaded from session file
* use call_id as item id if response id is not available
* ensure function_call item id is prefixed with fc
@Quazistax Quazistax requested a review from a team as a code owner November 29, 2025 12:43
@Quazistax Quazistax requested review from RomneyDa and removed request for a team November 29, 2025 12:43

continue bot commented Nov 29, 2025

Keep this PR in a mergeable state

All Green is an AI agent that automatically:

✅ Addresses code review comments

✅ Fixes failing CI checks

✅ Resolves merge conflicts

@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Nov 29, 2025

github-actions bot commented Nov 29, 2025

All contributors have signed the CLA ✍️ ✅
Posted by the CLA Assistant Lite bot.

continue bot commented Nov 29, 2025

Reviewed the changes - no documentation updates needed. This PR fixes internal OpenAI Responses API type conversion logic for function_call messages loaded from session files. The changes are purely implementation details that don't affect any user-facing features, configuration options, or documented behavior.

@Quazistax (Author)

I have read the CLA Document and I hereby sign the CLA

@cubic-dev-ai bot left a comment

No issues found across 1 file

* synthesize a single orchestrator function_call when multiple tool calls
   follow a reasoning (thinking) message to avoid protocol violations
@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Dec 2, 2025
@RomneyDa RomneyDa left a comment

@Quazistax, appreciate the contribution. This seems like a good change; I added some nitpick comments. Let's add some unit tests for these conversions.

    const wrapperItemId =
      rawWrapperId && rawWrapperId.startsWith("fc")
        ? rawWrapperId
        : `fc_${rawWrapperId}`;
Collaborator

I think this tool call ID modification would mess things up in the GUI, where we use the tool call ID for streaming tool call output and a variety of other things. If it doesn't match the original, many GUI things will fail.

Author

This only affects what is sent to the LLM; GUI ids stay unchanged.

    const shouldSynthesizeOrchestrator =
      toolCalls.length > 1 && !!prevReasoningId;

    if (shouldSynthesizeOrchestrator) {
Collaborator

consider restructuring so that synthesizeOrchestratorCall has pure inputs and outputs rather than directly modifying
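The pure in/out shape suggested here could look roughly like this (illustrative only: the item fields and helper name are assumptions, not the PR's actual code, and the same pattern applies to both synthesizeOrchestratorCall and emitFunctionCallsFor):

```typescript
// Illustrative pure-function shape: return new items instead of pushing
// into a shared `input` array, so the caller controls all mutation.
// Field names are assumptions based on the Responses API discussion above.
interface FunctionCallItem {
  type: "function_call";
  id: string;
  call_id: string;
  name: string;
  arguments: string;
}

function emitFunctionCallItems(
  toolCalls: { id: string; name: string; arguments: string }[],
): FunctionCallItem[] {
  return toolCalls.map(
    (tc): FunctionCallItem => ({
      type: "function_call",
      // Reuse the original "fc_..." item id when present; prefix otherwise.
      id: tc.id.startsWith("fc") ? tc.id : `fc_${tc.id}`,
      call_id: tc.id,
      name: tc.name,
      arguments: tc.arguments,
    }),
  );
}
```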

    const rawArgs = tc?.function?.arguments as string | undefined;
    let parsedArgs: any = {};
    try {
      parsedArgs = rawArgs && rawArgs.length ? JSON.parse(rawArgs) : {};
Collaborator

can use existing safeParseArgs utils for this
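A helper along those lines might look like this (a sketch only; the repo's actual safeParseArgs util may have a different signature):

```typescript
// Sketch of a safe JSON-arguments parser replacing the inline try/catch.
// This is an assumption; the repo's real safeParseArgs util may differ.
function safeParseArgs(rawArgs: string | undefined): Record<string, unknown> {
  if (!rawArgs || rawArgs.length === 0) {
    return {};
  }
  try {
    const parsed: unknown = JSON.parse(rawArgs);
    // Tool-call arguments should be a JSON object; anything else maps to {}.
    return typeof parsed === "object" && parsed !== null
      ? (parsed as Record<string, unknown>)
      : {};
  } catch {
    return {};
  }
}
```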

    }

    // Emit function_call items directly from toolCalls
    emitFunctionCallsFor(toolCalls, input, respId, prevReasoningId);
Collaborator

consider restructuring so that emitFunctionCallsFor has pure inputs and outputs rather than directly modifying

    const respId = msg.metadata?.responsesOutputItemId as
      | string
      | undefined;
    const prevMsgForThis = messages[i - 1] as ChatMessage | undefined;
Collaborator

this type casting seems unnecessary

    }

    // Emit function_call items directly from toolCalls
    emitFunctionCallsFor(toolCalls, input, respId, prevReasoningId);
Collaborator

Should this be if/else behavior? I.e., if shouldSynthesizeOrchestrator, do that; else emit function calls directly? Or I might be misunderstanding.

@github-project-automation github-project-automation bot moved this from Todo to In Progress in Issues and PRs Dec 3, 2025
* (vscode) ensure tool calls stored in the chat "assistant" message store
  the original item id their function_calls arrived with
* (openai) handle function_call messages loaded from an old session file
  by creating the item id from the call id if there is no original item id
* (openai) emit all function_call items when sending a response
@dosubot dosubot bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Dec 6, 2025
@Quazistax (Author)

I investigated some more. Although the previous "hack" with an additional "wrapper" function call worked, it does not address the real issue and has some minor side effects.
The real issue is that the exact items generated by reasoning were not sent back to OpenAI. First, not all function calls were sent back in the response. Second, it looks like there is some interconnecting logic, either via the information in encrypted_content or via the reasoning id ("rs_..."), by which OpenAI knows which subsequent function_call ids ("fc_...") it expects.

Legacy logic in gui/src/redux/slices/sessionSlice.ts puts all consecutive function_call OpenAI messages under one "assistant" message that keeps only one "fc_" id (from the last function_call).
This change (commit 3) ensures the "fc_" id is remembered for each function call and that the same ones are emitted in the response. It does not handle old session files containing reasoning multi-tool messages; a new session needs to be started.

Note, it looks like sessionSlice.ts is only used in the VS Code implementation, so the others (CLI, JetBrains) still need to be checked/updated.
It also needs to be checked whether this impacts other LLM models (if they use core/llm/openaiTypeConverters.ts).


About the previous "wrapper" hack/workaround: it turns out it does not matter what the arguments are. It can be a dummy function_call with empty arguments, just to send the old "fc_" item id in the response.

Side effects: the AI can spend some thinking time confused, observing that it called more function calls than intended. But it looks like that only happens when the workaround is applied to the current reasoning response; I did not notice side effects when applying it to older reasoning messages.

If a workaround for old sessions containing reasoning multi-tool messages is wanted, I can upload that too.
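The per-tool-call bookkeeping described in this comment could be sketched like so (names are assumptions drawn from the discussion, not the actual sessionSlice.ts code):

```typescript
// Sketch: remember the original "fc_..." item id on each tool call,
// instead of keeping only the last one on the shared "assistant" message.
// Field names are assumptions based on the PR discussion.
interface ToolCallState {
  callId: string;
  responsesOutputItemId?: string;
}

function rememberItemId(
  toolCalls: ToolCallState[],
  callId: string,
  itemId: string,
): ToolCallState[] {
  // Pure update: attach the item id to the matching call only.
  return toolCalls.map((tc) =>
    tc.callId === callId ? { ...tc, responsesOutputItemId: itemId } : tc,
  );
}
```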

@RomneyDa RomneyDa left a comment

@Quazistax see comments about removed mode. I haven't looked closely at the core code yet


Labels

size:M This PR changes 30-99 lines, ignoring generated files.

Projects

Status: In Progress

Development

Successfully merging this pull request may close these issues.

No tool call found for function call output with call_id
