fix(openai-agents): apply content tracing flag to content #3487
base: main
Conversation
Walkthrough
Adds a runtime gate for emitting prompt/content attributes.
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Suggested reviewers
Poem
Pre-merge checks and finishing touches
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Important
Looks good to me! 👍
Reviewed everything up to 77fa6f4 in 1 minute and 43 seconds.
- Reviewed 134 lines of code in 3 files
- Skipped 0 files when reviewing.
- Skipped posting 8 draft comments. View those below.
- Modify your settings and rules to customize what types of comments Ellipsis leaves. And don't forget to react with 👍 or 👎 to teach Ellipsis.
1. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:298
- Draft comment:
Guarding content logging by checking should_send_prompts() ensures that sensitive prompt data is only recorded when permitted, aligning this behavior with the OpenAI instrumentation.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply states that the current implementation aligns with OpenAI instrumentation, which is not necessary for the PR author to know.
2. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:399
- Draft comment:
The added check with should_send_prompts() before processing 'output.content' prevents logging of content when tracing is disabled, maintaining consistency with other parts of the code.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, as it only praises the addition of a check without providing any actionable feedback or suggestions for improvement. It does not align with the guidelines for useful comments.
3. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:430
- Draft comment:
Adding the should_send_prompts() condition on the branch handling direct text output safeguards against unintended logging when content tracing is disabled.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply states that a change is good, which violates the rule against making purely informative comments.
4. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:466
- Draft comment:
Guarding the legacy prompt extraction with should_send_prompts() ensures that prompt data from input is only logged when allowed, enhancing consistency across the tracing logic.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, as it praises the use of should_send_prompts() without suggesting any changes or improvements. It doesn't ask for confirmation or suggest any specific action.
5. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:508
- Draft comment:
Applying the should_send_prompts() check here before processing output content ensures that sensitive response content is only recorded when appropriate.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply acknowledges a good practice without offering any specific advice or raising any concerns.
6. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:539
- Draft comment:
The should_send_prompts() condition now also guards the branch for direct text outputs, promoting consistent control over sensitive data logging.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, as it praises a change without providing any actionable feedback or suggestions. It does not align with the rules for useful comments.
7. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py:23
- Draft comment:
Refactoring the should_send_prompts() function with explicit type hints and applying _is_truthy uniformly improves clarity and consistency. This change ensures broader support for various truthy values.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, praising the refactoring without providing any actionable feedback or suggestions. It doesn't ask for confirmation or suggest improvements, so it doesn't align with the rules for useful comments.
8. packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:181
- Draft comment:
Aligning the should_send_prompts() implementation to use _is_truthy for both the environment variable and the context override brings consistency with the openai-agents instrumentation.
- Reason this comment was not posted: Comment did not seem useful. Confidence is useful = 0% <= threshold 50%. This comment is purely informative, explaining the reason behind a change without suggesting any action or asking for confirmation. It doesn't align with the rules for useful comments.
Workflow ID: wflow_nroeXO8JV5fdsBQi
You can customize by changing your verbosity settings, reacting with 👍 or 👎, replying to comments, or adding code review rules.
Actionable comments posted: 1
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)
399-443: Consider refactoring duplicated response processing logic.
There is significant code duplication between lines 399-443 and 508-552. Both blocks handle response output processing (extracting content from ResponseOutputMessage, tool calls, and direct text content) with nearly identical logic. Consider extracting this into a helper method to improve maintainability.
Note: This is pre-existing duplication, not introduced by this PR.
Example refactoring approach:
```python
def _process_response_output(self, otel_span, response, prefix_attr):
    """Extract and set response output attributes."""
    if not (hasattr(response, 'output') and response.output):
        return
    for i, output in enumerate(response.output):
        if should_send_prompts() and hasattr(output, 'content') and output.content:
            # Text message with content array (ResponseOutputMessage)
            content_text = "".join(
                content_item.text
                for content_item in output.content
                if hasattr(content_item, 'text')
            )
            if content_text:
                otel_span.set_attribute(f"{prefix_attr}.{i}.content", content_text)
                otel_span.set_attribute(f"{prefix_attr}.{i}.role", getattr(output, 'role', 'assistant'))
        # ... rest of logic
```
Also applies to: 508-552
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (3)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (7 hunks)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (3)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
- dont_throw (52-78)
- should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
- should_send_prompts (181-188)
- _is_truthy (177-178)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (14)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  - _is_truthy (19-20)
  - should_send_prompts (23-30)
- packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/utils.py (1)
  - should_send_prompts (49-52)
- packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/utils.py (1)
  - should_send_prompts (32-35)
- packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py (1)
  - should_send_prompts (28-31)
- packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/utils.py (1)
  - should_send_prompts (25-28)
- packages/opentelemetry-instrumentation-cohere/opentelemetry/instrumentation/cohere/utils.py (1)
  - should_send_prompts (38-41)
- packages/opentelemetry-instrumentation-vertexai/opentelemetry/instrumentation/vertexai/utils.py (1)
  - should_send_prompts (11-14)
- packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/utils.py (1)
  - should_send_prompts (36-39)
- packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/utils.py (1)
  - should_send_prompts (14-17)
- packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/utils.py (1)
  - should_send_prompts (11-14)
- packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/utils.py (1)
  - should_send_prompts (11-14)
- packages/opentelemetry-instrumentation-replicate/opentelemetry/instrumentation/replicate/utils.py (1)
  - should_send_prompts (11-14)
- packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/utils.py (1)
  - should_send_prompts (11-14)
- packages/opentelemetry-instrumentation-haystack/opentelemetry/instrumentation/haystack/utils.py (1)
  - should_send_prompts (21-24)
🔇 Additional comments (6)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
18-18: LGTM: Private constant follows naming conventions.
The constant is appropriately prefixed with _ to indicate internal use.
181-188: Verify consistency with other instrumentation packages before merging.
This PR updates should_send_prompts() in the openai and openai-agents packages to use _is_truthy() for both environment variable and override checks. Before merging, confirm:
- Whether other instrumentation packages (langchain, llamaindex, anthropic, groq, cohere, vertexai, bedrock, watsonx, ollama, mistralai, replicate, transformers, haystack) use a different pattern for the same function
- If an inconsistency exists, decide whether to:
- Update all packages in this PR
- Create a follow-up issue to standardize across all packages
- Document why only these two packages are being updated
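For readers following the discussion, here is a minimal sketch of the combined check this comment refers to. It is an illustration only: the _is_truthy body and the context-override key name are assumptions, not the exact code in either package.

```python
import os

from opentelemetry import context as context_api

_TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
# The context key name is an assumption for illustration; the real key may differ.
_OVERRIDE_ENABLE_CONTENT_TRACING = "override_enable_content_tracing"


def _is_truthy(value) -> bool:
    # Normalize None, booleans, and strings such as "true"/"1"/"yes" to a bool.
    if isinstance(value, bool):
        return value
    if value is None:
        return False
    return str(value).strip().lower() in ("true", "1", "yes", "on")


def should_send_prompts() -> bool:
    # The env var defaults to "true"; the context override is OR-ed in, so it
    # can enable content tracing but cannot disable it (see the OR-semantics
    # discussion later in this thread).
    env_setting = os.getenv(_TRACELOOP_TRACE_CONTENT, "true")
    override = context_api.get_value(_OVERRIDE_ENABLE_CONTENT_TRACING)
    return _is_truthy(env_setting) or _is_truthy(override)
```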
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
9-9: LGTM: Consistent with openai package.
The private constant naming is consistent with the openai package implementation.
23-30: LGTM: Implementation consistent with openai package.
The function signature, docstring, and implementation are now consistent with the openai package. The use of _is_truthy() for both environment variable and context override provides uniform truthiness evaluation.
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
14-14: LGTM: Import enables content tracing control.
The import of should_send_prompts enables the gating mechanism for content tracing as intended by this PR.
301-301: LGTM: Content tracing guards correctly implemented.
The should_send_prompts() guards are correctly placed to control content and prompt attribute emission. The implementation correctly checks the flag before setting content-related attributes across all code paths.
Also applies to: 402-402, 433-433, 469-469, 511-511, 542-542
Force-pushed from 77fa6f4 to d034a5c.
Actionable comments posted: 0
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)
301-305: Content emission is correctly gated; consider caching should_send_prompts() per span.
The new checks around prompt and completion content ensure that only when should_send_prompts() evaluates truthy do you emit potentially sensitive text, while still recording non-content metadata (roles, usage, etc.) unconditionally. This aligns with the stated PR goal of applying the content tracing flag across Agents paths.
For a small perf/clarity win, you could compute the flag once in on_span_end and reuse it instead of calling should_send_prompts() multiple times inside loops:

```diff
@@
-    if span in self._otel_spans:
-        otel_span = self._otel_spans[span]
-        span_data = getattr(span, 'span_data', None)
-        if span_data and (
+    if span in self._otel_spans:
+        otel_span = self._otel_spans[span]
+        span_data = getattr(span, 'span_data', None)
+        send_prompts = should_send_prompts()
+        if span_data and (
@@
-    # Set content attribute
-    if should_send_prompts() and content is not None:
+    # Set content attribute
+    if send_prompts and content is not None:
@@
-    # Handle different output types
-    if should_send_prompts() and hasattr(output, 'content') and output.content:
+    # Handle different output types
+    if send_prompts and hasattr(output, 'content') and output.content:
@@
-    elif should_send_prompts() and hasattr(output, 'text'):
+    elif send_prompts and hasattr(output, 'text'):
@@
-    input_data = getattr(span_data, 'input', [])
-    if should_send_prompts() and input_data:
+    input_data = getattr(span_data, 'input', [])
+    if send_prompts and input_data:
@@
-    # Handle different output types
-    if should_send_prompts() and hasattr(output, 'content') and output.content:
+    # Handle different output types
+    if send_prompts and hasattr(output, 'content') and output.content:
@@
-    elif should_send_prompts() and hasattr(output, 'text'):
+    elif send_prompts and hasattr(output, 'text'):
```

This keeps behavior identical while avoiding repeated env/context lookups inside the hot loops.
Also applies to: 402-415, 433-439, 469-484, 511-524, 542-548
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- README.md (1 hunks)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (7 hunks)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧠 Learnings (1)
📚 Learning: 2025-12-02T21:09:48.690Z
Learnt from: duanyutong
Repo: traceloop/openllmetry PR: 3487
File: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:177-178
Timestamp: 2025-12-02T21:09:48.690Z
Learning: The opentelemetry-instrumentation-openai and opentelemetry-instrumentation-openai-agents packages must remain independent and not share code, so code duplication between them is acceptable.
Applied to files:
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
  - should_send_prompts (23-30)
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)
  - should_send_prompts (181-188)
🔇 Additional comments (3)
README.md (1)
183-192: Verify no duplicate test setup instructions elsewhere in the README.
The summary indicates this pre-test setup instruction block is inserted in two locations within the Contributing section, duplicating the guidance. However, only one instance is visible in the provided code at lines 183-192. This discrepancy should be verified to ensure no unintended duplication exists elsewhere in the file.
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)
177-188: Content-tracing flag logic looks solid; confirm intended OR semantics and default.
_is_truthy() gives robust normalization for both the env var and the trace-context override, and should_send_prompts() now behaves consistently with the agents-side helper. One subtle point: with env_setting defaulting to "true" and using OR, the context override can enable tracing when the env is falsy but cannot disable it when the env (or default) is truthy. Please confirm this precedence and default-on behavior is intentional for your privacy/config story.
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)
14-14: Importing should_send_prompts into hooks is appropriate.
Pulling in should_send_prompts from .utils cleanly wires the hooks into the shared content-tracing decision without expanding this module's responsibilities.
Force-pushed from 32e4fc9 to 8f64e4e.
Actionable comments posted: 0
♻️ Duplicate comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
19-20: Code duplication is acceptable here.
The _is_truthy function is duplicated from the openai package, which is acceptable since these packages must remain independent. Based on learnings, no shared utility extraction is needed.
🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)
18-18: Inconsistent constant naming between packages.
This file uses TRACELOOP_TRACE_CONTENT (public), while the openai_agents package uses _TRACELOOP_TRACE_CONTENT (private, with underscore prefix). The PR description mentions renaming this constant for consistency, but it appears the rename wasn't applied here.

```diff
-TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
+_TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
```

Then update the reference on line 186 accordingly.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
Disabled knowledge base sources:
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- README.md (1 hunks)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (9 hunks)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- README.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py
📄 CodeRabbit inference engine (CLAUDE.md)
**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules
Files:
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
🧠 Learnings (1)
📚 Learning: 2025-12-02T21:09:48.690Z
Learnt from: duanyutong
Repo: traceloop/openllmetry PR: 3487
File: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:177-178
Timestamp: 2025-12-02T21:09:48.690Z
Learning: The opentelemetry-instrumentation-openai and opentelemetry-instrumentation-openai-agents packages must remain independent and not share code, so code duplication between them is acceptable.
Applied to files:
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
- packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  - dont_throw (52-78)
  - should_send_prompts (23-30)
- packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  - dont_throw (132-160)
  - should_send_prompts (181-188)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
- should_send_prompts (181-188)
- _is_truthy (177-178)
🔇 Additional comments (12)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
186-188: Potential logic issue: override cannot disable content tracing.
The or operator means content will be traced if either condition is truthy. Since env_setting defaults to "true", setting the override to False won't disable tracing. If the intent is that the override should be able to disable tracing when explicitly set to false, consider:

```diff
-    return _is_truthy(env_setting) or _is_truthy(override)
+    if override is not None:
+        return _is_truthy(override)
+    return _is_truthy(env_setting)
```

If the current behavior (override can only enable, never disable) is intentional, please add a clarifying comment.
177-178: LGTM!
The _is_truthy helper correctly normalizes various input types (None, bool, string) for truthiness checks.
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
9-10: LGTM!
The constant is appropriately marked as private with the underscore prefix.
28-30: Same logic concern: override cannot disable content tracing.
Same issue as in the openai package: the or operator means the override can only enable tracing, not disable it when the environment variable defaults to "true". If both packages should behave identically (which they do now), consider whether this is the intended behavior.
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (8)
236-236: LGTM!
Good pattern: calling should_send_prompts() once and storing in a Final variable ensures consistent behavior throughout the span lifecycle and avoids repeated function calls.
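A hypothetical condensed sketch of that pattern; the attribute names and the stubbed helper below are placeholders for illustration, not the instrumentation's actual code.

```python
from typing import Final


def should_send_prompts() -> bool:
    # Stand-in for the gate in the package's utils module, stubbed so this
    # sketch is self-contained; the real helper consults the env var/context.
    return True


def end_span_sketch(otel_span, prompt_text: str) -> None:
    # Evaluate the content-tracing flag once per span and reuse it, mirroring
    # the Final-variable pattern noted above.
    should_trace_content: Final[bool] = should_send_prompts()

    # Non-content metadata is always recorded.
    otel_span.set_attribute("llm.request.type", "chat")

    # Content-bearing attributes are emitted only when the flag is truthy.
    if should_trace_content and prompt_text:
        otel_span.set_attribute("gen_ai.prompt.0.content", prompt_text)
        otel_span.set_attribute("gen_ai.prompt.0.role", "user")
```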
345-349: Consider gating tool call arguments as content.
Tool call arguments may contain sensitive user data passed to functions. These are emitted without the should_trace_content guard, unlike prompt and response content. If tool arguments should be treated as content for privacy purposes, they should also be gated:

```diff
-                if tool_call.get('arguments'):
+                if should_trace_content and tool_call.get('arguments'):
                     args = tool_call['arguments']
                     if not isinstance(args, str):
                         args = json.dumps(args)
                     otel_span.set_attribute(f"{prefix}.tool_calls.{j}.arguments", args)
```

The same consideration applies to lines 420-432 and 528-541 where tool call attributes are set.
3-3: LGTM!
The Final import supports the type annotation on line 236, correctly documenting that should_trace_content won't be reassigned.
14-14: LGTM!
Import of should_send_prompts from the local utils module enables the content tracing gate functionality.
302-305: Content gating correctly applied to prompt content.
The guard prevents emission of potentially sensitive prompt content when content tracing is disabled.
403-415: Content gating correctly applied to response outputs.
Both structured content (line 403) and direct text output (line 434) are properly guarded.
Also applies to: 434-439
470-484: Content gating correctly applied to legacy input path.
The legacy fallback path for input content is properly gated with should_trace_content.
512-524: Content gating correctly applied to legacy response path.
Both structured content (line 512) and direct text output (line 543) in the legacy path are properly guarded, maintaining consistency with the primary path.
Also applies to: 543-548
Force-pushed from 811ad62 to 5fd3e64.
Force-pushed from 5fd3e64 to f4bca6b.
Changes
- openai-agents instrumentation but unused; use it to control content logging in a way similar to the openai package
- should_send_prompts implementations in the two packages consistent (they had different behaviours)
- _is_truthy to both the env var and the trace context override variable for consistency and maximum compatibility
- feat(instrumentation): ... or fix(instrumentation): ...

Important
Enable content tracing in openai-agents by using should_send_prompts() and ensure consistency with openai package.
- should_send_prompts() in _hooks.py to control content logging for OpenAI Agents.
- _is_truthy to both environment variable and trace context override in should_send_prompts() for consistency.
- should_send_prompts() behavior in openai_agents/utils.py and openai/utils.py.

This description was created by Ellipsis for 77fa6f4. You can customize this summary. It will automatically update as commits are pushed.
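As a usage note, a hypothetical snippet showing how a user of the instrumentation would be expected to opt out of content capture once this flag is applied; exact behavior depends on the merged implementation.

```python
import os

# Setting the OpenLLMetry content flag to a falsy value before instrumentation
# initializes should keep prompt/completion text off spans, while non-content
# metadata (model, token usage, tool-call names) is still recorded.
os.environ["TRACELOOP_TRACE_CONTENT"] = "false"
```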
Summary by CodeRabbit
Bug Fixes
Improvements
Documentation
✏️ Tip: You can customize this high-level summary in your review settings.