
Conversation


@duanyutong duanyutong commented Dec 2, 2025

Changes

  • should_send_prompts is defined in the openai-agents instrumentation but never used; use it to control content logging, mirroring the openai package
  • make the should_send_prompts implementations in the two packages consistent (they previously behaved differently)
  • apply _is_truthy to both the environment variable and the trace context override for consistency and maximum compatibility (a sketch of the resulting logic follows this checklist)
  • I have added tests that cover my changes.
  • If adding a new instrumentation or changing an existing one, I've added screenshots from some observability platform showing the change.
  • PR name follows conventional commits format: feat(instrumentation): ... or fix(instrumentation): ....
  • (If applicable) I have updated the documentation accordingly.
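
For reviewers who want the gist without opening the diff, here is a minimal sketch of the gating logic described above, assuming the TRACELOOP_TRACE_CONTENT environment variable (default "true") and an OpenTelemetry context override; the override key name and the accepted set of truthy strings are illustrative assumptions, not necessarily the package's exact values.

import os

from opentelemetry import context as context_api

_TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"


def _is_truthy(value) -> bool:
    """Normalize None, bool, or string inputs to a bool (assumed helper shape)."""
    if isinstance(value, bool):
        return value
    if value is None:
        return False
    return str(value).strip().lower() in ("true", "1", "yes")  # accepted set is an assumption


def should_send_prompts() -> bool:
    """Return True when prompt/completion content may be recorded on spans."""
    env_setting = os.getenv(_TRACELOOP_TRACE_CONTENT, "true")  # content tracing defaults to on
    override = context_api.get_value("override_enable_content_tracing")  # key name assumed
    return _is_truthy(env_setting) or _is_truthy(override)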

Important

Enable content tracing in openai-agents by using should_send_prompts() and ensure consistency with the openai package.

  • Behavior:
    • Use should_send_prompts() in _hooks.py to control content logging for OpenAI Agents.
    • Apply _is_truthy to both environment variable and trace context override in should_send_prompts() for consistency.
  • Consistency:
    • Align should_send_prompts() behavior in openai_agents/utils.py and openai/utils.py.

This description was created by Ellipsis for 77fa6f4.

Summary by CodeRabbit

  • Bug Fixes

    • More reliable control over when prompt/content data is emitted during tracing.
  • Improvements

    • Centralized gating for content tracing for consistent behavior across integrations.
    • More predictable evaluation of environment and override settings that control content-tracing.
  • Documentation

    • Added pre-test setup instructions to the contributing/Getting Started guidance.



CLAassistant commented Dec 2, 2025

CLA assistant check
All committers have signed the CLA.


coderabbitai bot commented Dec 2, 2025

Warning

Rate limit exceeded

@duanyutong has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 26 minutes and 45 seconds before requesting another review.


📥 Commits

Reviewing files that changed from the base of the PR and between 8f64e4e and f4bca6b.

📒 Files selected for processing (5)
  • .gitignore (1 hunks)
  • CONTRIBUTING.md (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (9 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)

Walkthrough

Adds a runtime gate for emitting prompt/content attributes: should_send_prompts() is exported and used to condition all prompt/content tracing across OpenTelemetry OpenAI instrumentations; utility modules gain explicit bool returns, a truthiness helper, and a named env-constant.
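
Concretely, the gate might be wired into the hooks roughly as sketched below. The attribute names and the helper function are illustrative only; just the evaluate-once flag (should_trace_content, per the Agent hooks entry in the table below) and the content-only gating are taken from this review.

from typing import Final, Optional

from opentelemetry.instrumentation.openai_agents.utils import should_send_prompts
from opentelemetry.trace import Span

# Evaluate the flag once and reuse it for every prompt/content attribute
# (where exactly the flag lives is an illustrative choice here).
should_trace_content: Final[bool] = should_send_prompts()


def _set_prompt_attributes(otel_span: Span, index: int, role: str, content: Optional[str]) -> None:
    """Illustrative helper: metadata is always recorded, text only when allowed."""
    otel_span.set_attribute(f"gen_ai.prompt.{index}.role", role)  # non-content metadata
    if should_trace_content and content is not None:
        # Potentially sensitive prompt text is emitted only when content tracing is enabled.
        otel_span.set_attribute(f"gen_ai.prompt.{index}.content", content)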

Changes

  • Agent hooks (packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py): Import should_send_prompts and Final; introduce should_trace_content: Final[bool] = should_send_prompts() and gate all prompt/content attribute emissions and response/text handling on that flag.
  • Agent utils (packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py): Add the _TRACELOOP_TRACE_CONTENT constant; update the should_send_prompts() signature to -> bool, add a docstring, and use the constant for the env lookup with truthiness evaluation.
  • OpenAI utils (packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py): Add an _is_truthy(value) helper; refactor should_send_prompts() -> bool to use _is_truthy, read the _TRACELOOP_TRACE_CONTENT env var, and consult the context override.
  • Documentation (README.md): Insert pre-test setup instructions under Contributing (npm ci, nx install/run commands) for package test preparation.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • Inspect packages/.../_hooks.py to ensure every content/prompt emission path is correctly gated.
  • Verify _is_truthy() semantics and edge cases in packages/.../openai/utils.py.
  • Confirm consistent use of _TRACELOOP_TRACE_CONTENT and updated should_send_prompts() signatures.

Suggested reviewers

  • nirga


Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 50.00%, which is below the required threshold of 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (2 passed)
  • Title check: ✅ Passed. The title 'fix(openai-agents): apply content tracing flag to content' directly summarizes the main change: applying the content tracing flag functionality to control content logging in the openai-agents instrumentation.
  • Description check: ✅ Passed. Check skipped because CodeRabbit’s high-level summary is enabled.



@ellipsis-dev ellipsis-dev bot left a comment


Important

Looks good to me! 👍

Reviewed everything up to 77fa6f4 in 1 minute and 43 seconds.
  • Reviewed 134 lines of code in 3 files
  • Skipped 0 files when reviewing.
  • Skipped posting 8 draft comments. View those below.
1. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:298
  • Draft comment:
    Guarding content logging by checking should_send_prompts() ensures that sensitive prompt data is only recorded when permitted, aligning this behavior with the OpenAI instrumentation.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply states that the current implementation aligns with OpenAI instrumentation, which is not necessary for the PR author to know.
2. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:399
  • Draft comment:
    The added check with should_send_prompts() before processing 'output.content' prevents logging of content when tracing is disabled, maintaining consistency with other parts of the code.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it only praises the addition of a check without providing any actionable feedback or suggestions for improvement. It does not align with the guidelines for useful comments.
3. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:430
  • Draft comment:
    Adding the should_send_prompts() condition on the branch handling direct text output safeguards against unintended logging when content tracing is disabled.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply states that a change is good, which violates the rule against making purely informative comments.
4. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:466
  • Draft comment:
    Guarding the legacy prompt extraction with should_send_prompts() ensures that prompt data from input is only logged when allowed, enhancing consistency across the tracing logic.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it praises the use of should_send_prompts() without suggesting any changes or improvements. It doesn't ask for confirmation or suggest any specific action.
5. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:508
  • Draft comment:
    Applying the should_send_prompts() check here before processing output content ensures that sensitive response content is only recorded when appropriate.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative and does not provide any actionable feedback or suggestions for improvement. It simply acknowledges a good practice without offering any specific advice or raising any concerns.
6. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py:539
  • Draft comment:
    The should_send_prompts() condition now also guards the branch for direct text outputs, promoting consistent control over sensitive data logging.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, as it praises a change without providing any actionable feedback or suggestions. It does not align with the rules for useful comments.
7. packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py:23
  • Draft comment:
    Refactoring the should_send_prompts() function with explicit type hints and applying _is_truthy uniformly improves clarity and consistency. This change ensures broader support for various truthy values.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, praising the refactoring without providing any actionable feedback or suggestions. It doesn't ask for confirmation or suggest improvements, so it doesn't align with the rules for useful comments.
8. packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:181
  • Draft comment:
    Aligning the should_send_prompts() implementation to use _is_truthy for both the environment variable and the context override brings consistency with the openai-agents instrumentation.
  • Reason this comment was not posted:
    Comment did not seem useful. Confidence is useful = 0% <= threshold 50% This comment is purely informative, explaining the reason behind a change without suggesting any action or asking for confirmation. It doesn't align with the rules for useful comments.

Workflow ID: wflow_nroeXO8JV5fdsBQi



@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

399-443: Consider refactoring duplicated response processing logic.

There is significant code duplication between lines 399-443 and 508-552. Both blocks handle response output processing (extracting content from ResponseOutputMessage, tool calls, and direct text content) with nearly identical logic. Consider extracting this into a helper method to improve maintainability.

Note: This is pre-existing duplication, not introduced by this PR.

Example refactoring approach:

def _process_response_output(self, otel_span, response, prefix_attr):
    """Extract and set response output attributes."""
    if not (hasattr(response, 'output') and response.output):
        return
        
    for i, output in enumerate(response.output):
        if should_send_prompts() and hasattr(output, 'content') and output.content:
            # Text message with content array (ResponseOutputMessage)
            content_text = "".join(
                content_item.text for content_item in output.content 
                if hasattr(content_item, 'text')
            )
            if content_text:
                otel_span.set_attribute(f"{prefix_attr}.{i}.content", content_text)
                otel_span.set_attribute(f"{prefix_attr}.{i}.role", 
                    getattr(output, 'role', 'assistant'))
        # ... rest of logic

Also applies to: 508-552

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between c8c1553 and 77fa6f4.

📒 Files selected for processing (3)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (7 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2 hunks)
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (3)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • dont_throw (52-78)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • should_send_prompts (181-188)
  • _is_truthy (177-178)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (14)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • _is_truthy (19-20)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-langchain/opentelemetry/instrumentation/langchain/utils.py (1)
  • should_send_prompts (49-52)
packages/opentelemetry-instrumentation-llamaindex/opentelemetry/instrumentation/llamaindex/utils.py (1)
  • should_send_prompts (32-35)
packages/opentelemetry-instrumentation-anthropic/opentelemetry/instrumentation/anthropic/utils.py (1)
  • should_send_prompts (28-31)
packages/opentelemetry-instrumentation-groq/opentelemetry/instrumentation/groq/utils.py (1)
  • should_send_prompts (25-28)
packages/opentelemetry-instrumentation-cohere/opentelemetry/instrumentation/cohere/utils.py (1)
  • should_send_prompts (38-41)
packages/opentelemetry-instrumentation-vertexai/opentelemetry/instrumentation/vertexai/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-bedrock/opentelemetry/instrumentation/bedrock/utils.py (1)
  • should_send_prompts (36-39)
packages/opentelemetry-instrumentation-watsonx/opentelemetry/instrumentation/watsonx/utils.py (1)
  • should_send_prompts (14-17)
packages/opentelemetry-instrumentation-ollama/opentelemetry/instrumentation/ollama/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-mistralai/opentelemetry/instrumentation/mistralai/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-replicate/opentelemetry/instrumentation/replicate/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-transformers/opentelemetry/instrumentation/transformers/utils.py (1)
  • should_send_prompts (11-14)
packages/opentelemetry-instrumentation-haystack/opentelemetry/instrumentation/haystack/utils.py (1)
  • should_send_prompts (21-24)
🔇 Additional comments (6)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)

18-18: LGTM: Private constant follows naming conventions.

The constant is appropriately prefixed with _ to indicate internal use.


181-188: Verify consistency with other instrumentation packages before merging.

This PR updates should_send_prompts() in the openai and openai-agents packages to use _is_truthy() for both environment variable and override checks. Before merging, confirm:

  1. Whether other instrumentation packages (langchain, llamaindex, anthropic, groq, cohere, vertexai, bedrock, watsonx, ollama, mistralai, replicate, transformers, haystack) use a different pattern for the same function
  2. If an inconsistency exists, decide whether to:
    • Update all packages in this PR
    • Create a follow-up issue to standardize across all packages
    • Document why only these two packages are being updated
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)

9-9: LGTM: Consistent with openai package.

The private constant naming is consistent with the openai package implementation.


23-30: LGTM: Implementation consistent with openai package.

The function signature, docstring, and implementation are now consistent with the openai package. The use of _is_truthy() for both environment variable and context override provides uniform truthiness evaluation.

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)

14-14: LGTM: Import enables content tracing control.

The import of should_send_prompts enables the gating mechanism for content tracing as intended by this PR.


301-301: LGTM: Content tracing guards correctly implemented.

The should_send_prompts() guards are correctly placed to control content and prompt attribute emission. The implementation correctly checks the flag before setting content-related attributes across all code paths.

Also applies to: 402-402, 433-433, 469-469, 511-511, 542-542

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from 77fa6f4 to d034a5c on December 3, 2025 at 17:23

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

301-305: Content emission is correctly gated; consider caching should_send_prompts() per span.

The new checks around prompt and completion content ensure that only when should_send_prompts() evaluates truthy do you emit potentially sensitive text, while still recording non-content metadata (roles, usage, etc.) unconditionally. This aligns with the stated PR goal of applying the content tracing flag across Agents paths.

For a small perf/clarity win, you could compute the flag once in on_span_end and reuse it instead of calling should_send_prompts() multiple times inside loops:

@@
-        if span in self._otel_spans:
-            otel_span = self._otel_spans[span]
-            span_data = getattr(span, 'span_data', None)
-            if span_data and (
+        if span in self._otel_spans:
+            otel_span = self._otel_spans[span]
+            span_data = getattr(span, 'span_data', None)
+            send_prompts = should_send_prompts()
+            if span_data and (
@@
-                        # Set content attribute
-                        if should_send_prompts() and content is not None:
+                        # Set content attribute
+                        if send_prompts and content is not None:
@@
-                            # Handle different output types
-                            if should_send_prompts() and hasattr(output, 'content') and output.content:
+                            # Handle different output types
+                            if send_prompts and hasattr(output, 'content') and output.content:
@@
-                            elif should_send_prompts() and hasattr(output, 'text'):
+                            elif send_prompts and hasattr(output, 'text'):
@@
-                input_data = getattr(span_data, 'input', [])
-                if should_send_prompts() and input_data:
+                input_data = getattr(span_data, 'input', [])
+                if send_prompts and input_data:
@@
-                            # Handle different output types
-                            if should_send_prompts() and hasattr(output, 'content') and output.content:
+                            # Handle different output types
+                            if send_prompts and hasattr(output, 'content') and output.content:
@@
-                            elif should_send_prompts() and hasattr(output, 'text'):
+                            elif send_prompts and hasattr(output, 'text'):

This keeps behavior identical while avoiding repeated env/context lookups inside the hot loops.

Also applies to: 402-415, 433-439, 469-484, 511-524, 542-548

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 77fa6f4 and d034a5c.

📒 Files selected for processing (4)
  • README.md (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (7 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧠 Learnings (1)
📚 Learning: 2025-12-02T21:09:48.690Z
Learnt from: duanyutong
Repo: traceloop/openllmetry PR: 3487
File: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:177-178
Timestamp: 2025-12-02T21:09:48.690Z
Learning: The opentelemetry-instrumentation-openai and opentelemetry-instrumentation-openai-agents packages must remain independent and not share code, so code duplication between them is acceptable.

Applied to files:

  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)
  • should_send_prompts (181-188)
🔇 Additional comments (3)
README.md (1)

183-192: Verify no duplicate test setup instructions elsewhere in the README.

The summary indicates this pre-test setup instruction block is inserted in two locations within the Contributing section, duplicating the guidance. However, only one instance is visible in the provided code at lines 183-192. This discrepancy should be verified to ensure no unintended duplication exists elsewhere in the file.

packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)

177-188: Content-tracing flag logic looks solid; confirm intended OR semantics and default.

_is_truthy() gives robust normalization for both the env var and the trace-context override, and should_send_prompts() now behaves consistently with the agents-side helper. One subtle point: with env_setting defaulting to "true" and using OR, the context override can enable tracing when the env is falsy but cannot disable it when the env (or default) is truthy. Please confirm this precedence and default-on behavior is intentional for your privacy/config story.
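
A small, self-contained illustration of that precedence question (the helper below is a simplified stand-in, not the library's _is_truthy):

def _decision(env_setting: str, override) -> bool:
    """Mirror `_is_truthy(env_setting) or _is_truthy(override)` with a simplified truthiness check."""
    def truthy(value) -> bool:
        if isinstance(value, bool):
            return value
        return isinstance(value, str) and value.strip().lower() in ("true", "1", "yes")
    return truthy(env_setting) or truthy(override)


# Default env ("true") with an explicitly falsy override: content is still traced,
# because OR lets either side enable tracing but neither side veto it.
assert _decision("true", False) is True
# Only when the env var is falsy can the override change the outcome.
assert _decision("false", True) is True
assert _decision("false", None) is False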

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (1)

14-14: Importing should_send_prompts into hooks is appropriate.

Pulling in should_send_prompts from .utils cleanly wires the hooks into the shared content-tracing decision without expanding this module’s responsibilities.

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch 2 times, most recently from 32e4fc9 to 8f64e4e on December 3, 2025 at 18:52

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)

19-20: Code duplication is acceptable here.

The _is_truthy function is duplicated from the openai package, which is acceptable since these packages must remain independent. Based on learnings, no shared utility extraction is needed.

🧹 Nitpick comments (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1)

18-18: Inconsistent constant naming between packages.

This file uses TRACELOOP_TRACE_CONTENT (public), while the openai_agents package uses _TRACELOOP_TRACE_CONTENT (private, with underscore prefix). The PR description mentions renaming this constant for consistency, but it appears the rename wasn't applied here.

-TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"
+_TRACELOOP_TRACE_CONTENT = "TRACELOOP_TRACE_CONTENT"

Then update the reference on line 186 accordingly.

📜 Review details


📥 Commits

Reviewing files that changed from the base of the PR and between 32e4fc9 and 8f64e4e.

📒 Files selected for processing (4)
  • README.md (1 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (9 hunks)
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2 hunks)
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
  • README.md
🧰 Additional context used
📓 Path-based instructions (1)
**/*.py

📄 CodeRabbit inference engine (CLAUDE.md)

**/*.py: Store API keys only in environment variables/secure vaults; never hardcode secrets in code
Use Flake8 for code linting and adhere to its rules

Files:

  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py
  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
  • packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py
🧠 Learnings (1)
📚 Learning: 2025-12-02T21:09:48.690Z
Learnt from: duanyutong
Repo: traceloop/openllmetry PR: 3487
File: packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py:177-178
Timestamp: 2025-12-02T21:09:48.690Z
Learning: The opentelemetry-instrumentation-openai and opentelemetry-instrumentation-openai-agents packages must remain independent and not share code, so code duplication between them is acceptable.

Applied to files:

  • packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py
🧬 Code graph analysis (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (2)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)
  • dont_throw (52-78)
  • should_send_prompts (23-30)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • dont_throw (132-160)
  • should_send_prompts (181-188)
packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (1)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)
  • should_send_prompts (181-188)
  • _is_truthy (177-178)
🔇 Additional comments (12)
packages/opentelemetry-instrumentation-openai/opentelemetry/instrumentation/openai/utils.py (2)

186-188: Potential logic issue: override cannot disable content tracing.

The or operator means content will be traced if either condition is truthy. Since env_setting defaults to "true", setting the override to False won't disable tracing. If the intent is that the override should be able to disable tracing when explicitly set to false, consider:

-    return _is_truthy(env_setting) or _is_truthy(override)
+    if override is not None:
+        return _is_truthy(override)
+    return _is_truthy(env_setting)

If the current behavior (override can only enable, never disable) is intentional, please add a clarifying comment.


177-178: LGTM!

The _is_truthy helper correctly normalizes various input types (None, bool, string) for truthiness checks.
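
For concreteness, the kind of normalization being described, shown with a simplified stand-in rather than the package's actual helper:

def _is_truthy(value) -> bool:
    # Simplified stand-in: bools pass through, None is falsy, strings are compared case-insensitively.
    if isinstance(value, bool):
        return value
    if value is None:
        return False
    return str(value).strip().lower() in ("true", "1", "yes")


assert _is_truthy(True) and _is_truthy("TRUE") and _is_truthy(" 1 ")
assert not _is_truthy(None) and not _is_truthy("false") and not _is_truthy("")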

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/utils.py (2)

9-10: LGTM!

The constant is appropriately marked as private with the underscore prefix.


28-30: Same logic concern: override cannot disable content tracing.

Same issue as in the openai package—the or operator means the override can only enable tracing, not disable it when the environment variable defaults to "true". If both packages should behave identically (which they do now), consider whether this is the intended behavior.

packages/opentelemetry-instrumentation-openai-agents/opentelemetry/instrumentation/openai_agents/_hooks.py (8)

236-236: LGTM!

Good pattern: calling should_send_prompts() once and storing in a Final variable ensures consistent behavior throughout the span lifecycle and avoids repeated function calls.


345-349: Consider gating tool call arguments as content.

Tool call arguments may contain sensitive user data passed to functions. These are emitted without the should_trace_content guard, unlike prompt and response content. If tool arguments should be treated as content for privacy purposes, they should also be gated:

-                                if tool_call.get('arguments'):
+                                if should_trace_content and tool_call.get('arguments'):
                                     args = tool_call['arguments']
                                     if not isinstance(args, str):
                                         args = json.dumps(args)
                                     otel_span.set_attribute(f"{prefix}.tool_calls.{j}.arguments", args)

The same consideration applies to lines 420-432 and 528-541 where tool call attributes are set.


3-3: LGTM!

The Final import supports the type annotation on line 236, correctly documenting that should_trace_content won't be reassigned.


14-14: LGTM!

Import of should_send_prompts from the local utils module enables the content tracing gate functionality.


302-305: Content gating correctly applied to prompt content.

The guard prevents emission of potentially sensitive prompt content when content tracing is disabled.


403-415: Content gating correctly applied to response outputs.

Both structured content (line 403) and direct text output (line 434) are properly guarded.

Also applies to: 434-439


470-484: Content gating correctly applied to legacy input path.

The legacy fallback path for input content is properly gated with should_trace_content.


512-524: Content gating correctly applied to legacy response path.

Both structured content (line 512) and direct text output (line 543) in the legacy path are properly guarded, maintaining consistency with the primary path.

Also applies to: 543-548

@duanyutong duanyutong force-pushed the fix-openai-agents-content branch 2 times, most recently from 811ad62 to 5fd3e64 on December 4, 2025 at 16:59
@duanyutong duanyutong force-pushed the fix-openai-agents-content branch from 5fd3e64 to f4bca6b on December 4, 2025 at 17:00
