
Conversation

Contributor

@ejfine ejfine commented Dec 15, 2025

Pull in upstream template changes for VCRpy config

Summary by CodeRabbit

  • New Features

    • Added Python asyncio configuration template option for async code support.
    • Added VCRpy test fixture support for recording and replaying HTTP requests in tests.
  • Improvements

    • Enhanced error tracking for background tasks in async tests.
    • Updated development tools, testing frameworks, and CI/CD actions to newer versions.
    • Improved development container setup and CI/CD workflow robustness with better error reporting.


@ejfine ejfine self-assigned this Dec 15, 2025

coderabbitai bot commented Dec 15, 2025

Warning

Rate limit exceeded

@ejfine has exceeded the limit for the number of commits or files that can be reviewed per hour. Please wait 0 minutes and 33 seconds before requesting another review.

⌛ How to resolve this issue?

After the wait time has elapsed, a review can be triggered using the @coderabbitai review command as a PR comment. Alternatively, push new commits to this PR.

We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work?

CodeRabbit enforces hourly rate limits for each developer per organization.

Our paid plans have higher rate limits than the trial, open-source and free plans. In all cases, we re-allow further reviews after a brief timeout.

Please see our FAQ for further information.

📥 Commits

Reviewing files that changed from the base of the PR and between 3d469a5 and 4d3b5bc.

📒 Files selected for processing (1)
  • template/tests/unit/conftest.py.jinja (1 hunks)

Walkthrough

This PR introduces asyncio and VCRpy test fixture support, updates numerous dependency and tool versions across the codebase, refactors Windows host helper scripts to use rsync, updates GitHub Actions workflows with enhanced error handling and newer action versions, modifies pre-commit configurations to reflect new generated-code paths, and adjusts configuration exclusions in pyright and gitignore patterns.

Changes

Cohort / File(s) Summary
Test Fixture Modules (New)
copier_template_resources/python_asyncio/asyncio_fixtures.py, copier_template_resources/python_asyncio/background_tasks.py, copier_template_resources/vcrpy_fixtures.py
New pytest fixture modules added to support asyncio background task monitoring and VCRpy HTTP cassette recording configuration. Background task tracking integrates callback-based exception handling with WeakSet and deque for memory efficiency.
Dependency & Version Updates
extensions/context.py, pyproject.toml, .devcontainer/install-ci-tooling.py
Bumped versions for uv (0.9.11→0.9.17), pnpm (10.23.0→10.25.0), pytest (9.0.1→9.0.2), and numerous other tools (sphinx, strawberry-graphql, fastapi, uvicorn, node, nuxt, tailwind, etc.). Some constraints converted to minimum-range (≥).
GitHub Actions Workflow Updates
.github/workflows/ci.yaml, .github/workflows/get-values.yaml, .github/workflows/pre-commit.yaml, .github/workflows/tag-on-merge.yaml, .github/actions/install_deps/action.yml, .github/actions/update-devcontainer-hash/action.yml, .github/reusable_workflows/build-docker-image.yaml, template/.github/workflows/*, template/.github/actions/*
Updated checkout (v5.0.0→v6.0.1), cache (v4.2.4→v4.3.0), setup-python/node (v6.0.0→v6.1.0), and AWS credentials actions (v5.1.0→v5.1.1). Added ref input to checkout for branch selection. Enhanced pre-commit job with exit code capture, git diff output, and explicit dependency lists. Added pr-short-num workflow output.
Copier Configuration
.copier-answers.yml, copier.yml, tests/copier_data/data*.yaml
Added two new boolean questions: configure_vcrpy and configure_python_asyncio (both default no). Updated commit reference v0.0.85→v0.0.90. Added default python_version "3.12.7". Updated test data fixtures with new configuration flags.
Infrastructure Scripts
.devcontainer/windows-host-helper.sh, template/.devcontainer/windows-host-helper.sh
Replaced the multi-step manual file-move logic with a single rsync invocation; added sudo-based removal, mkdir, and chown; excluded node_modules, .pnpm-store, and .venv from the sync; simplified temporary-directory handling.
Devcontainer Configuration
.devcontainer/devcontainer.json, template/.devcontainer/devcontainer.json.jinja
Removed the AWS CLI feature; updated the coderabbit extension 0.16.0→0.16.1; updated the context hash; made the AWS feature and extension conditional so they are included only when the project is not a child of the copier base template.
Pre-commit & Linting Configuration
.pre-commit-config.yaml, template/.pre-commit-config.yaml, .github/workflows/hash_git_files.py, template/.github/workflows/hash_git_files.py
Updated multiple pre-commit hook revisions (typos, prettier, ruff, pylint, pyright, etc.). Changed path exclusion patterns from .*/graphql_codegen/.* to .*/generated/graphql/.*. Narrowed exception handling in compute_adler32 to IsADirectoryError only; changed loop to reversed(range(...)).
Configuration & Metadata
pyrightconfig.json, template/pyrightconfig.json, .gitignore, template/.gitignore
Replaced graphql_codegen exclude with generated/graphql; added copier_template_resources exclude. Changed macOS ignore patterns from *.DS_Store, .DS_Store to recursive **/.DS_Store.
Template Documentation & Dependencies
template/README.md.jinja, template/pyproject.toml.jinja, template/tests/unit/conftest.py.jinja
Reordered Python Versions badge with conditional Codecov rendering. Restored pytest-randomly dependency; added pytest-recording and vcrpy under new configure_vcrpy condition. Added pytest_configure hook with VCRpy cassette logger setup and conditional imports.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

  • New asyncio/VCRpy fixture modules: Verify background task exception tracking, WeakSet/deque memory handling, and VCR configuration logic (filters, header stripping, cassette matching).
  • Windows rsync migration: Confirm rsync exclusion patterns (node_modules, .pnpm-store, .venv), sudo command necessity, and directory permission handling via chown.
  • Workflow control flow: Review pre-commit exit code capture, git diff output, and dependency list syntax conversions; verify ref input plumbing in checkout action.
  • Path exclusion changes: Validate that graphql_codegen→generated/graphql pattern change aligns with actual codebase structure; confirm copier_template_resources exclusion impacts.

Possibly related PRs

  • PR #71: Modifies CI/tooling and dependency version configuration in install-ci-tooling.py, extensions/context.py, and workflow action versions.
  • PR #70: Updates dependency/version configuration values in extensions/context.py and multiple tool/library versions.
  • PR #72: Modifies .github/workflows/get-values.yaml GitHub context display step alongside the pr-short-num output addition.

Pre-merge checks

❌ Failed checks (2 warnings)
  • Description check: ⚠️ Warning. The description 'Pull in upstream template changes for VCRpy config' is brief and does not follow the provided template structure with its required sections. Resolution: expand the description to cover why this change is necessary, how it addresses the issue, side effects, and the testing approach.
  • Docstring Coverage: ⚠️ Warning. Docstring coverage is 41.67%, which is below the required threshold of 80.00%. Resolution: run @coderabbitai generate docstrings to improve docstring coverage.
✅ Passed checks (1 passed)
  • Title check: ✅ Passed. The title 'Copier update: VCRpy' accurately reflects the primary change: adding VCRpy configuration to the Copier template.

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 13

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
.github/actions/install_deps/action.yml (1)

59-79: Version bumps for setup-python and setup-node contain breaking changes that require attention

  • actions/setup-python@v6.1.0: the Node 20→24 upgrade requires GitHub Actions Runner v2.327.1 or later, and urllib3 was bumped to 2.x, which has documented breaking changes to its internals.
  • actions/setup-node@v6.1.0: Removed always-auth input/handling; removed automatic caching support for non-npm package managers (caching now requires explicit configuration).
  • aws-actions/configure-aws-credentials@v5.1.1: Minor release with dependency updates, no breaking changes noted.

Verify that all runners meet the minimum version requirement and remove any always-auth configurations before merging.

template/.github/actions/install_deps/action.yml (1)

59-79: Update workflows to accommodate breaking changes in setup-python@v6.1.0 and setup-node@v6.1.0

  • For setup-python@v6.1.0: require GitHub runner v2.327.1 or later due to Node 24 upgrade; verify urllib3 v2 compatibility.
  • For setup-node@v6.1.0: remove any always-auth usage and adjust cache inputs (automatic caching now only for npm).
  • aws-actions/configure-aws-credentials@v5.1.1 introduces no breaking changes.
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: ASSERTIVE

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3ebc906 and 3d469a5.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (38)
  • .copier-answers.yml (2 hunks)
  • .devcontainer/devcontainer.json (2 hunks)
  • .devcontainer/install-ci-tooling.py (1 hunks)
  • .devcontainer/windows-host-helper.sh (1 hunks)
  • .github/actions/install_deps/action.yml (2 hunks)
  • .github/actions/update-devcontainer-hash/action.yml (1 hunks)
  • .github/reusable_workflows/build-docker-image.yaml (1 hunks)
  • .github/workflows/ci.yaml (6 hunks)
  • .github/workflows/get-values.yaml (2 hunks)
  • .github/workflows/hash_git_files.py (1 hunks)
  • .github/workflows/pre-commit.yaml (3 hunks)
  • .github/workflows/tag-on-merge.yaml (1 hunks)
  • .gitignore (1 hunks)
  • .pre-commit-config.yaml (6 hunks)
  • copier.yml (1 hunks)
  • copier_template_resources/python_asyncio/asyncio_fixtures.py (1 hunks)
  • copier_template_resources/python_asyncio/background_tasks.py (1 hunks)
  • copier_template_resources/vcrpy_fixtures.py (1 hunks)
  • extensions/context.py (5 hunks)
  • pyproject.toml (1 hunks)
  • pyrightconfig.json (1 hunks)
  • template/.devcontainer/devcontainer.json.jinja (2 hunks)
  • template/.devcontainer/windows-host-helper.sh (1 hunks)
  • template/.github/actions/install_deps/action.yml (2 hunks)
  • template/.github/actions/update-devcontainer-hash/action.yml (1 hunks)
  • template/.github/workflows/get-values.yaml (2 hunks)
  • template/.github/workflows/hash_git_files.py (1 hunks)
  • template/.github/workflows/pre-commit.yaml (3 hunks)
  • template/.gitignore (1 hunks)
  • template/.pre-commit-config.yaml (6 hunks)
  • template/README.md.jinja (1 hunks)
  • template/pyproject.toml.jinja (1 hunks)
  • template/pyrightconfig.json (1 hunks)
  • template/tests/unit/conftest.py.jinja (1 hunks)
  • template/tests/unit/{% if configure_vcrpy %}vcrpy_fixtures.py{% endif %} (1 hunks)
  • tests/copier_data/data1.yaml (1 hunks)
  • tests/copier_data/data2.yaml (1 hunks)
  • tests/copier_data/data3.yaml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
.github/workflows/hash_git_files.py (1)
template/.github/workflows/hash_git_files.py (1)
  • find_devcontainer_hash_line (75-88)
🪛 actionlint (1.7.9)
.github/workflows/pre-commit.yaml

72-72: shellcheck reported issue in this script: SC2086:info:6:8: Double quote to prevent globbing and word splitting

(shellcheck)

🪛 YAMLlint (1.37.1)
copier.yml

[warning] 43-43: truthy value should be one of [false, true]

(truthy)


[warning] 47-47: truthy value should be one of [false, true]

(truthy)

🔇 Additional comments (33)
.github/reusable_workflows/build-docker-image.yaml (1)

68-76: Version bumps are consistent and aligned across the repo

The upgrades to actions/checkout@v6.0.1 and aws-actions/configure-aws-credentials@v5.1.1 are used consistently throughout the repo, and the surrounding workflow logic and inputs remain unchanged.

template/.gitignore (1)

79-80: Recursive .DS_Store ignore pattern is appropriate

Switching to **/.DS_Store gives consistent ignore coverage for macOS metadata files at any depth and is aligned with common practice.

.gitignore (1)

79-80: Consistent .DS_Store ignore across repo and template

Using **/.DS_Store here to match the template keeps behavior consistent and correctly ignores macOS metadata files anywhere in the tree.

template/pyrightconfig.json (1)

13-18: Updated pyright excludes align with generated/template resources

Excluding **/generated/graphql, **/venv, and **/copier_template_resources is consistent with keeping pyright focused on non-generated code and avoiding noise from template resource modules, especially given these are support files rather than part of the user-facing template.

template/.github/workflows/pre-commit.yaml (2)

61-79: Improved pre-commit diagnostics look correct

Capturing the pre-commit exit code, printing a clear failure message, and showing git diff before exiting with the same code gives better feedback without altering success behavior.


34-44: Checkout action version and configuration are appropriate

actions/checkout@v6.0.1 is the current recommended version. The use of persist-credentials: false aligns with best practices for pre-commit workflows, which only require read access to the repository. v6 improved credential security by storing tokens in $RUNNER_TEMP rather than .git/config, making this configuration safe and idiomatic.

pyrightconfig.json (1)

13-18: Root pyright excludes correctly mirror template configuration

Using **/generated/graphql, **/venv, and **/copier_template_resources here keeps the root pyright behavior in sync with the template and avoids type-checking generated/template-resource code.

template/.github/actions/update-devcontainer-hash/action.yml (1)

29-35: Explicit ref with updated checkout action looks good

Using actions/checkout@v6.0.1 and wiring ref: ${{ inputs.branch }} makes the composite action deterministic about which branch it updates. The version is stable (released Dec 2, 2025), and both ref and persist-credentials remain fully supported with no breaking changes to their basic usage.

pyproject.toml (1)

7-14: Pytest version bump to 9.0.2 is safe with no compatibility concerns

The bump to pytest>=9.0.2 is straightforward. Both pytest-cov>=7.0.0 and pytest-randomly>=4.0.1 are confirmed compatible; pytest 9.0.2 includes compatibility fixes from earlier 9.0.x issues and introduces no new breaking changes affecting plugin usage.

.devcontainer/devcontainer.json (1)

19-20: Devcontainer extension bump and regenerated context hash look correct

The coderabbit VS Code extension version bump and the updated devcontainer context hash comment are consistent with the hash_git_files.py logic and don’t change runtime behavior of the container itself. Assuming the extension version exists and the hash was regenerated via the hook, this is good to go.

You may want to run the devcontainer context hash pre-commit hook once locally (or pre-commit run compute-devcontainer-context-hash --all-files) to confirm the hash comment stays stable across runs.

Also applies to: 61-61

template/.github/workflows/hash_git_files.py (1)

57-71: Safer exception handling and more robust hash-line search

Narrowing the catch in compute_adler32 to IsADirectoryError avoids hiding unexpected IO issues while still skipping directory-like entries, which is a solid improvement. Scanning lines in reverse in find_devcontainer_hash_line correctly biases toward the final closing brace, making the hash insertion logic more resilient to nested objects earlier in the file.

Also applies to: 75-88
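
For orientation, a minimal sketch of the two behaviors described above (a sketch only; the exact signatures and file handling in hash_git_files.py are assumptions, not a copy of the template's code):

import zlib
from pathlib import Path

def compute_adler32(path: Path) -> int:
    """Adler-32 of a file's contents; only directory-like entries are skipped."""
    try:
        return zlib.adler32(path.read_bytes())
    except IsADirectoryError:
        # A path reported by git that is actually a directory (e.g. a submodule
        # gitlink); any other IO error should still surface loudly.
        return 0

def find_devcontainer_hash_line(lines: list[str]) -> int:
    """Scan from the end so the final closing brace wins even with nested objects earlier in the file."""
    for idx in reversed(range(len(lines))):
        if lines[idx].strip().startswith("}"):
            return idx
    raise ValueError("no closing brace found")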

template/.pre-commit-config.yaml (1)

45-45: Template pre-commit updates stay consistent with root configuration

The template’s hook revisions and the move to .*/generated/graphql/.* in excludes mirror the root .pre-commit-config.yaml, which keeps generated projects aligned with this repo’s tooling. No behavioral regressions are apparent as long as generated GraphQL now lives under generated/graphql.

After updating the template, it’s worth instantiating a sample project (with GraphQL enabled) and running pre-commit run --all-files there to ensure the excludes match the generated layout.

Also applies to: 111-111, 198-198, 252-253, 260-261, 270-271, 277-279, 282-282, 289-291, 305-307

template/README.md.jinja (1)

9-10: Improved conditional rendering for Python and Codecov badges

Scoping the Python Versions badge under if not is_frozen_executable and the Codecov badge under if use_codecov matches the intended semantics: no PyPI-related badges for frozen-only distributions and no Codecov badge unless coverage reporting is configured. The {% raw %}/{% endraw %} juggling remains balanced.

When you next apply the template, check the rendered README for a few combinations (is_frozen_executable true/false, use_codecov true/false) to confirm the expected badges appear.

extensions/context.py (1)

13-116: Version matrix updates and relaxed constraints look consistent, but warrant end-to-end verification

The updated versions for Python tooling (pytest/Sphinx/pydantic/pyinstaller), async/VCR stack (vcrpy, pytest-recording, pytest-asyncio), JS/Node/Nuxt ecosystem, container images, and GitHub Actions all appear internally consistent, and shifting some specs to >=... gives downstream projects more flexibility.

Given how central this context is to generated projects and CI, it’s important to verify:

  • pyproject.toml / template dependencies interpolate these new strings correctly (especially the >=... specs).
  • The new runner labels and action versions (ubuntu-24.04, windows-2025, updated gha_*) are supported in your target GitHub environment.
  • A freshly generated project passes its full CI using these versions.

Consider generating at least one project for each major configuration (sync/async, GraphQL, VCRpy enabled, etc.), then running its CI or at least uv lock, pytest, and the GitHub Actions workflows to confirm all version combinations are valid.

template/tests/unit/{% if configure_vcrpy %}vcrpy_fixtures.py{% endif %} (1)

1-1: VCRpy fixture indirection follows existing template test pattern

Pointing the conditional vcrpy_fixtures.py test file at ../../../copier_template_resources/vcrpy_fixtures.py is consistent with the other template test resources and keeps the shared fixture implementation centralized. The relative path looks correct from template/tests/unit back to the repo root.

.pre-commit-config.yaml (1)

45-45: Bump pre-commit hook revisions and adjust GraphQL excludes
All hook revisions (typos, mirrors-prettier, check-jsonschema, ruff, pylint) have been updated, and exclusion patterns for generated GraphQL files now use .*/generated/graphql/.*. Run pre-commit run --all-files to confirm the new revisions and patterns behave as expected.
Applies to .pre-commit-config.yaml (lines 45, 111, 198, 252–253, 260–261, 270–271, 277–279, 282, 289–291, 305–307)

.github/actions/update-devcontainer-hash/action.yml (1)

29-35: Checkout version bump and explicit ref look correct

Using actions/checkout@v6.0.1 and wiring ref: ${{ inputs.branch }} is consistent with the action’s branch input and the later git push origin HEAD:${{ inputs.branch }} usage. This should ensure the intended branch tip is updated.

tests/copier_data/data1.yaml (1)

10-11: New VCRpy/asyncio flags aligned with template usage

configure_vcrpy: true and configure_python_asyncio: true cleanly exercise the “both enabled” path and match the naming used elsewhere in the template.

tests/copier_data/data3.yaml (1)

10-11: Mixed VCRpy/asyncio configuration looks intentional

This false/true combination gives coverage for “asyncio without VCRpy” and keeps key names consistent with other copier test data files.

.github/workflows/tag-on-merge.yaml (1)

17-21: Tag workflow checkout bump is straightforward

Upgrading to actions/checkout@v6.0.1 with the existing with parameters (merge commit SHA, full history, no credentials) preserves behavior while aligning with newer actions.

.devcontainer/install-ci-tooling.py (1)

10-11: UV/PNPM version pin bumps are isolated and safe

Updating UV_VERSION to "0.9.17" and PNPM_VERSION to "10.25.0" keeps the installer logic unchanged and just refreshes tool versions; behavior should remain the same assuming those versions are available upstream.

tests/copier_data/data2.yaml (1)

10-11: Baseline VCRpy/asyncio-disabled config looks good

These flags give a clear “both disabled” test scenario and are consistent with the other copier data variants.

.github/workflows/pre-commit.yaml (1)

34-46: Pre-commit workflow action upgrades and caching look solid

Using actions/checkout@v6.0.1 in both checkout paths and actions/cache@v4.3.0 for pre-commit hooks is consistent with the rest of the PR. The enhanced pre-commit step with explicit exit-code capture and a diff dump should make failures easier to debug.

Also applies to: 61-70

.copier-answers.yml (1)

2-16: Upstream commit bump and new template feature hints are consistent

Tracking _commit: v0.0.90 plus adding template_might_want_to_use_python_asyncio and template_might_want_to_use_vcrpy aligns this answers file with the new template capabilities and the copier test data you added.

.github/workflows/hash_git_files.py (1)

68-70: Exception narrowing and reverse search logic look solid

Catching only IsADirectoryError keeps real file IO issues visible while handling the Windows/symlink edge case, and the reversed(range(len(lines))) loop is clear and equivalent to the prior behavior for locating the closing brace and hash line.

Also applies to: 77-87

template/pyproject.toml.jinja (1)

31-35: Confirm version specifiers for pytest-recording and vcrpy

The conditional wiring around configure_vcrpy, create_docs, and is_frozen_executable looks good, but please confirm how the version variables are defined:

  • If pytest_recording_version / vcrpy_version are bare versions like 1.2.3, you probably want:
-    "pytest-recording{% endraw %}{{ pytest_recording_version }}{% raw %}",
-    "vcrpy{% endraw %}{{ vcrpy_version }}{% raw %}",
+    "pytest-recording>={% endraw %}{{ pytest_recording_version }}{% raw %}",
+    "vcrpy>={% endraw %}{{ vcrpy_version }}{% raw %}",
  • If they already include the comparison operator (e.g. ==1.2.3), the current template is fine.
template/.devcontainer/devcontainer.json.jinja (1)

5-10: Devcontainer AWS feature gating and extension bumps look consistent

Conditionally including the AWS CLI devcontainer feature and AWS Toolkit extension only when not a child of the base template is a good way to avoid duplication, and the coderabbit-vscode version bump is straightforward. Just ensure the new feature and extension versions match what your devcontainer base images and VS Code environment expect.

Also applies to: 28-33

.github/workflows/ci.yaml (2)

23-24: Job dependencies and Python version wiring are coherent

Chaining get-values → pre-commit → lint-matrix → required-check via needs makes the workflow ordering explicit, and passing python-version: 3.12.7 into the reusable pre-commit workflow keeps it aligned with the lint matrix. Just be aware that required-check will still fail if lint-matrix ends in a non-success state (including skipped), which appears to be the current intended behavior.

Also applies to: 29-30, 33-35, 146-148


59-59: Action bumps and enhanced pre-commit handling look good; confirm compatibility

  • Updating to newer actions/checkout and actions/cache versions plus caching pre-commit hooks is standard and should improve stability/perf.
  • The updated pre-commit step correctly captures a non-zero exit code, prints a diff for debugging, and then exits with the same code, while still allowing success to short-circuit.

Please confirm:

  • The new action versions (lines 59, 116) are supported by your runners/images.
  • The pre-commit reusable workflow/entrypoint (lines 125-135) doesn't itself assume different SKIP semantics or Python versions.
.github/workflows/get-values.yaml (1)

12-14: New pr-short-num output wiring and checkout bump look fine; verify callers handle non‑PR events

The new pr-short-num output correctly forwards steps.find-pr-num.outputs.number, and the checkout bump is straightforward. Just ensure:

  • Any workflows calling this one tolerate pr-short-num being empty when the event is not pull_request/merge_group.
  • No callers assume this output is always set.

You can sanity-check usage by searching for all workflow_call usages of get-values.yaml in .github/workflows and inspecting how pr-short-num is consumed.

Also applies to: 27-29, 38-38

template/.devcontainer/windows-host-helper.sh (1)

46-50: LGTM! Rsync approach is clean and efficient.

The rsync implementation properly handles the merge with appropriate exclusions for volume mounts. The -a flag preserves permissions and timestamps, and the explicit excludes prevent overwriting mounted volumes.

copier_template_resources/vcrpy_fixtures.py (2)

18-21: LGTM! Windows-specific network handling is well-documented.

The comment clearly explains why ALLOWED_HOSTS needs special handling on Windows and acknowledges the trade-off of reduced safety in CI while still maintaining checks on Linux.


39-43: The defensive type casting and assertion are appropriate for this use case.

VCR.py documentation confirms match_on accepts a sequence (list or tuple) of matcher names. Casting to tuple[str, ...] before the concatenation operation is the right approach, and the runtime isinstance check is good defensive programming given that VCR lacks complete type annotations. The approach is sound.
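
As a rough illustration of that pattern, here is a sketch of a pytest-recording vcr_config override using the same casting idea (the header names, record mode, and matcher list are illustrative assumptions, not the template's actual values):

from typing import Any, cast

import pytest

@pytest.fixture(scope="module")
def vcr_config() -> dict[str, Any]:
    """Keyword arguments that pytest-recording forwards to VCR."""
    config: dict[str, Any] = {
        "filter_headers": ["authorization"],  # keep credentials out of recorded cassettes
        "record_mode": "none",  # replay-only; re-record deliberately when an API changes
        "match_on": ("method", "scheme", "host", "port", "path", "query"),
    }
    # VCR accepts any sequence of matcher names; cast to a concrete tuple so the
    # concatenation below type-checks, then verify the assumption at runtime since
    # VCR itself is only partially annotated.
    matchers = cast("tuple[str, ...]", config["match_on"])
    assert isinstance(matchers, tuple)
    config["match_on"] = (*matchers, "body")  # additionally match requests on the body
    return config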

Comment on lines +30 to +33
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
mkdir -p "./$repoName"
sudo chown -R "$(whoami):$(whoami)" "./$repoName" # TODO: see if this alone is enough to fix everything

🧹 Nitpick | 🔵 Trivial

Rsync-based sync is a nice simplification; drop redundant rm line

The new flow (fresh clone into a tmp dir, rsync -av into ./$repoName with excludes, then cleanup) is much simpler and safer than the previous manual copy logic. One nit:

  • sudo rm -rf "./$repoName" already removes the entire target directory; the subsequent sudo rm -rf "./$repoName/*.md" is redundant, and the quotes prevent glob expansion anyway. It can be removed to avoid confusion.

Also applies to: 38-50, 52-56

🤖 Prompt for AI Agents
.devcontainer/windows-host-helper.sh around lines 30 to 33: the second removal
command `sudo rm -rf "./$repoName/*.md"` is redundant because `sudo rm -rf
"./$repoName"` already deletes the directory and the quotes prevent glob
expansion; remove that redundant line here and also remove the identical
redundant rm lines in the other ranges (lines 38-50 and 52-56) referenced in the
comment; if you actually intended to only remove markdown files instead of the
whole directory, replace the whole-dir rm with a properly unquoted glob-aware
command or use find to target *.md, but per the review just delete the redundant
`rm -rf "./$repoName/*.md"` occurrences.

Comment on lines 71 to +79
      - name: Run pre-commit
-       run: pre-commit run -a
+       run: |
+         pre-commit run -a || PRE_COMMIT_EXIT_CODE=$?
+         if [ -n "$PRE_COMMIT_EXIT_CODE" ]; then
+           echo "Pre-commit failed with exit code $PRE_COMMIT_EXIT_CODE"
+           echo "Showing git diff:"
+           git --no-pager diff
+           exit $PRE_COMMIT_EXIT_CODE
+         fi

⚠️ Potential issue | 🟠 Major

Quote PRE_COMMIT_EXIT_CODE when exiting to satisfy shellcheck

The new failure-handling block is sensible, but shellcheck’s SC2086 warning is likely about unquoted $PRE_COMMIT_EXIT_CODE in the exit command. Quoting it avoids word-splitting/globbing warnings and keeps linters happy without changing behavior.

You can address this with a minimal tweak:

-          pre-commit run -a || PRE_COMMIT_EXIT_CODE=$?
-          if [ -n "$PRE_COMMIT_EXIT_CODE" ]; then
-            echo "Pre-commit failed with exit code $PRE_COMMIT_EXIT_CODE"
+          pre-commit run -a || PRE_COMMIT_EXIT_CODE=$?
+          if [ -n "$PRE_COMMIT_EXIT_CODE" ]; then
+            echo "Pre-commit failed with exit code $PRE_COMMIT_EXIT_CODE"
             echo "Showing git diff:"
             git --no-pager diff
-            exit $PRE_COMMIT_EXIT_CODE
+            exit "$PRE_COMMIT_EXIT_CODE"
           fi
🧰 Tools
🪛 actionlint (1.7.9)

72-72: shellcheck reported issue in this script: SC2086:info:6:8: Double quote to prevent globbing and word splitting

(shellcheck)

🤖 Prompt for AI Agents
In .github/workflows/pre-commit.yaml around lines 71 to 79, the exit command
uses an unquoted shell variable which triggers shellcheck SC2086; update the
exit invocation to quote the variable (exit "$PRE_COMMIT_EXIT_CODE") so the
value is passed safely without word-splitting or globbing while preserving
behavior.

Comment on lines +4 to +5
from backend_api.background_tasks import background_task_exceptions
from backend_api.background_tasks import background_tasks_set

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Description: Verify backend_api.background_tasks module exists and exports required symbols

# Search for the background_tasks module
echo "=== Searching for background_tasks.py ==="
fd -t f "background_tasks.py"

echo -e "\n=== Searching for symbol definitions ==="
# Search for definitions of the required globals
rg -n "background_task_exceptions|background_tasks_set" --type py -C 3

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 6049


🏁 Script executed:

#!/bin/bash
# Check if backend_api package exists
echo "=== Searching for backend_api package ==="
fd -t d "backend_api" --max-depth 3

echo -e "\n=== Checking for __init__.py in backend_api ==="
find . -path "*/backend_api/__init__.py"

echo -e "\n=== Checking if asyncio_fixtures.py is in a template ==="
# Look at the directory structure
ls -la copier_template_resources/python_asyncio/

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 497


Fix incorrect import paths in asyncio_fixtures.py.

The imports reference backend_api.background_tasks, but this module does not exist in the codebase. The symbols background_task_exceptions and background_tasks_set are defined in copier_template_resources/python_asyncio/background_tasks.py in the same directory. Update the imports to use relative imports:

from .background_tasks import background_task_exceptions
from .background_tasks import background_tasks_set

Alternatively, if this template requires absolute imports, ensure backend_api is properly configured as the package namespace in the generated project.

🤖 Prompt for AI Agents
In copier_template_resources/python_asyncio/asyncio_fixtures.py around lines 4
to 5, the file is importing from backend_api.background_tasks which doesn't
exist; update the imports to reference the local background_tasks module
instead. Replace those imports with relative imports from .background_tasks for
background_task_exceptions and background_tasks_set (i.e., import from the
sibling module in the same package), or if absolute imports are required ensure
the project package backend_api is defined and configured so the absolute import
resolves.
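
For reference, a sketch of what such a sibling background_tasks module typically looks like, based on the WeakSet/deque description in the walkthrough (the two global names match this PR, but the bodies and the helper track_background_task are assumptions):

import asyncio
from collections import deque
from weakref import WeakSet

# WeakSet lets finished tasks be garbage-collected instead of accumulating across tests.
background_tasks_set: WeakSet[asyncio.Task[None]] = WeakSet()
# Bounded deque keeps recent exceptions without growing without limit.
background_task_exceptions: deque[BaseException] = deque(maxlen=100)

def track_background_task(task: asyncio.Task[None]) -> None:
    """Register a task and capture any exception it raises via a done callback."""
    background_tasks_set.add(task)

    def _on_done(finished: asyncio.Task[None]) -> None:
        if not finished.cancelled() and (exc := finished.exception()) is not None:
            background_task_exceptions.append(exc)

    task.add_done_callback(_on_done)

With that layout, the relative imports suggested above resolve regardless of the generated project's package name.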

Comment on lines +8 to +11
async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]):
    _, pending = await asyncio.wait(tasks_list, timeout=5.0)
    if pending:
        raise RuntimeError(f"There are still pending tasks: {pending}")

🧹 Nitpick | 🔵 Trivial

Consider task cancellation on timeout.

The _wait_for_tasks function raises an error if tasks are pending after the timeout, but doesn't cancel the tasks. Pending tasks could continue running in the background and cause issues in subsequent tests.

Apply this diff to cancel pending tasks:

 async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]):
     _, pending = await asyncio.wait(tasks_list, timeout=5.0)
     if pending:
+        for task in pending:
+            task.cancel()
         raise RuntimeError(f"There are still pending tasks: {pending}")
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]):
    _, pending = await asyncio.wait(tasks_list, timeout=5.0)
    if pending:
        raise RuntimeError(f"There are still pending tasks: {pending}")

async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]):
    _, pending = await asyncio.wait(tasks_list, timeout=5.0)
    if pending:
        for task in pending:
            task.cancel()
        raise RuntimeError(f"There are still pending tasks: {pending}")
🤖 Prompt for AI Agents
In copier_template_resources/python_asyncio/asyncio_fixtures.py around lines 8
to 11, _wait_for_tasks currently raises if tasks remain pending after the
timeout but does not cancel them; update it to iterate over pending tasks, call
task.cancel() for each, then await their completion (e.g., await
asyncio.gather(*pending, return_exceptions=True) or await asyncio.wait(pending))
to ensure they are cancelled and any exceptions are observed, and finally raise
the RuntimeError (including pending details if desired).

Comment on lines +24 to +29
try:
    loop = asyncio.get_running_loop()
except RuntimeError:
    asyncio.run(_wait_for_tasks(tasks_list))
else:
    loop.run_until_complete(_wait_for_tasks(tasks_list))

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

find . -type f -name "asyncio_fixtures.py" | head -20

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 162


🏁 Script executed:

cat -n ./copier_template_resources/python_asyncio/asyncio_fixtures.py

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 1589


🏁 Script executed:

python3 << 'EOF'
import asyncio

# Test 1: Verify that run_until_complete on running loop raises RuntimeError
async def test_run_until_complete_on_running_loop():
    """Test what happens when calling run_until_complete on a running loop"""
    loop = asyncio.get_running_loop()
    
    async def dummy_coro():
        return "done"
    
    try:
        # This should raise RuntimeError
        loop.run_until_complete(dummy_coro())
        print("ERROR: run_until_complete did not raise RuntimeError")
    except RuntimeError as e:
        print(f"CONFIRMED: run_until_complete raises RuntimeError: {e}")

asyncio.run(test_run_until_complete_on_running_loop())

# Test 2: Verify create_task behavior
print("\n--- Testing create_task behavior ---")
async def test_create_task():
    """Test if create_task waits for completion or just schedules"""
    loop = asyncio.get_running_loop()
    
    async def long_task():
        await asyncio.sleep(0.1)
        return "completed"
    
    task = loop.create_task(long_task())
    print(f"Task created, done={task.done()}")
    # Note: task is still running, not awaited yet
    
    # To verify, we'd need to await it
    result = await task
    print(f"After await: done={task.done()}, result={result}")

asyncio.run(test_create_task())

# Test 3: Verify asyncio.run() outside loop
print("\n--- Testing asyncio.run outside loop ---")
async def simple_coro():
    return "success"

result = asyncio.run(simple_coro())
print(f"asyncio.run() outside loop: {result}")
EOF

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 576


🏁 Script executed:

# Check if there are any tests using this fixture and whether they're async
grep -r "fail_on_background_task_errors" --include="*.py" | head -20

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 198


🏁 Script executed:

# Check the background_tasks module to understand what we're dealing with
grep -r "background_tasks_set\|background_task_exceptions" --include="*.py" -B 2 -A 2 | head -50

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 4405


🏁 Script executed:

# Check if this is used in an async context and review the broader structure
cat -n ./copier_template_resources/python_asyncio/background_tasks.py | head -50

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 1950


🏁 Script executed:

# Let me check if there's any documentation or conftest about how pytest-asyncio is configured
find . -name "conftest.py" -o -name "pytest.ini" -o -name "pyproject.toml" | head -10

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 151


🏁 Script executed:

cat ./pytest.ini ./pyproject.toml 2>/dev/null | head -100

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 1745


🏁 Script executed:

# Let me verify the exact issue - is there a scenario where the fixture runs while an event loop is active?
# Check if tests might be async
grep -r "async def test\|@pytest.mark.asyncio" . --include="*.py" | head -20

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 82


🏁 Script executed:

# Let me test what the proper fix should be - we need to wait for tasks even in a running loop
python3 << 'EOF'
import asyncio

# Test proper handling when there's a running loop
async def test_proper_handling():
    """Test how to properly wait for coroutines in a running loop"""
    loop = asyncio.get_running_loop()
    
    async def _wait_for_tasks(tasks_list):
        return "completed"
    
    # When loop is running, we CAN'T use run_until_complete
    # Option 1: gather and await (but we're in async context, need to be in sync fixture)
    # Option 2: Just let the loop continue naturally
    
    tasks_list = []
    
    # What create_task does - schedules but doesn't wait
    task = loop.create_task(_wait_for_tasks(tasks_list))
    print(f"After create_task: done={task.done()}")
    
    # To properly wait in a running loop, we'd need to await
    await task
    print(f"After await: done={task.done()}")

print("--- Testing if fixture context matters ---")
# The fixture is SYNC but tries to handle async context
# If running in sync context: no event loop running
# If running during async test: event loop is running

# For sync fixture calling async code when NO loop exists:
result = asyncio.run(asyncio.sleep(0))
print(f"asyncio.run in sync context: OK")

# But the fixture can't await, so it can't use the loop directly
# The real issue: fixture is sync but tries to manage async cleanup
EOF

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 174


🏁 Script executed:

# Let me verify a critical detail: when does the fixture teardown run relative to event loop lifecycle?
python3 << 'EOF'
# Simulate what happens when a sync fixture tries to handle async cleanup
import asyncio

print("=== Scenario 1: Sync fixture, no running loop ===")
# Current code's except branch - CORRECT
async def _wait():
    await asyncio.sleep(0.01)
    return "done"

# asyncio.run works fine
result = asyncio.run(_wait())
print(f"asyncio.run result: {result}")

print("\n=== Scenario 2: Sync fixture, WITH running loop (the problem) ===")
async def simulate_fixture_in_running_loop():
    loop = asyncio.get_running_loop()
    print(f"Loop exists: {loop}")
    
    async def _wait():
        await asyncio.sleep(0.01)
        return "done"
    
    # Current code's else branch - INCORRECT (would raise)
    print("Trying run_until_complete on running loop...")
    try:
        loop.run_until_complete(_wait())
    except RuntimeError as e:
        print(f"  ERROR: {e}")
    
    # Suggested fix's else branch - INCOMPLETE (doesn't wait)
    print("Trying create_task (suggested fix)...")
    task = loop.create_task(_wait())
    print(f"  Task created but NOT awaited: done={task.done()}")
    print(f"  Fixture would return here without waiting!")
    
    # Proper way to handle this scenario in async context
    print("Proper way (await in async context)...")
    result = await asyncio.gather(task)
    print(f"  Task completed: {result}")

asyncio.run(simulate_fixture_in_running_loop())

print("\n=== Conclusion ===")
print("Suggested fix's create_task() doesn't wait for completion!")
EOF

Repository: LabAutomationAndScreening/copier-python-package-template

Length of output: 874


Fix the inverted event loop handling logic.

Lines 24-29 have a critical flaw: when asyncio.get_running_loop() succeeds (returns a loop), the code calls loop.run_until_complete(), which raises RuntimeError on an already-running loop. Conversely, when no loop exists (RuntimeError), it correctly uses asyncio.run().

The logic is backwards. However, the suggested fix of using loop.create_task() is incomplete—it schedules the task without waiting for it to complete, causing the fixture to return before tasks finish.

Since this fixture is synchronous and cannot await, a proper fix requires either:

  1. Making the fixture async: @pytest_asyncio.fixture with async def
  2. Avoiding the running-loop scenario entirely by ensuring background tasks complete before fixture teardown
  3. If a running loop exists during teardown, consider using asyncio.ensure_future() with a polling mechanism (though awkward in a sync context)

First, clarify the expected test execution context: are tests async or sync? This will determine the correct approach.

🤖 Prompt for AI Agents
In copier_template_resources/python_asyncio/asyncio_fixtures.py around lines
24-29 the event-loop handling is inverted and currently schedules tasks without
correctly waiting when a loop is running; first confirm whether tests are
intended to run async or sync. If tests are async, convert the fixture to an
async pytest_asyncio fixture (async def) and await the _wait_for_tasks coroutine
directly. If tests are sync, keep a synchronous fixture and on no-running-loop
use asyncio.run(...), but when a loop is already running use
asyncio.run_coroutine_threadsafe(_wait_for_tasks(...), loop) and block on the
returned future.result(timeout) to ensure tasks complete before teardown;
implement appropriate timeout and error handling. Ensure you do not call
loop.run_until_complete on a running loop.
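
If the tests turn out to be async, the first option above (an async fixture) is the most direct fix. A minimal sketch assuming pytest-asyncio and the names already used in this review; the task filtering and the final re-raise are illustrative, not the template's actual code:

import asyncio

import pytest_asyncio

from .background_tasks import background_task_exceptions, background_tasks_set

async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]) -> None:
    _, pending = await asyncio.wait(tasks_list, timeout=5.0)
    if pending:
        for task in pending:
            task.cancel()
        raise RuntimeError(f"There are still pending tasks: {pending}")

@pytest_asyncio.fixture(autouse=True)
async def fail_on_background_task_errors():
    yield  # run the test body first
    # We are already inside the running loop here, so we can await directly
    # instead of choosing between asyncio.run and loop.run_until_complete.
    if unfinished := [t for t in background_tasks_set if not t.done()]:
        await _wait_for_tasks(unfinished)
    if background_task_exceptions:
        raise RuntimeError(f"Background tasks raised: {list(background_task_exceptions)}")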

Comment on lines +40 to +47
configure_vcrpy:
  type: bool
  help: Should VCRpy be configured for use during unit testing in Python?
  default: no
configure_python_asyncio:
  type: bool
  help: Will python code be using asyncio?
  default: no

🧹 Nitpick | 🔵 Trivial

New VCRpy/asyncio flags and Python default are coherent; consider canonical YAML booleans

The configure_vcrpy / configure_python_asyncio questions and the python_version default of 3.12.7 line up with the rest of the template and CI. If you want to quiet YAML linters and be fully canonical, you could switch the new default: no values to default: false (or change all similar defaults in the file in a follow‑up for consistency).

Also applies to: 49-52

🧰 Tools
🪛 YAMLlint (1.37.1)

[warning] 43-43: truthy value should be one of [false, true]

(truthy)


[warning] 47-47: truthy value should be one of [false, true]

(truthy)

🤖 Prompt for AI Agents
In copier.yml around lines 40-47 (and also apply the same change to lines
49-52), the new boolean flags use "default: no", which is non-canonical for YAML
booleans; update those defaults to "default: false" for configure_vcrpy and
configure_python_asyncio (and any other similar boolean defaults in the nearby
block) so linters and YAML parsers see canonical boolean values.

Comment on lines +30 to +31
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"

⚠️ Potential issue | 🟡 Minor

Remove dead code on line 31.

Line 30 deletes the entire ./$repoName directory, so line 31 attempting to remove *.md files from that directory has no effect. This is either leftover code from a previous implementation or a logic error.

Apply this diff to remove the dead code:

 sudo rm -rf "./$repoName" || true
-sudo rm -rf "./$repoName/*.md"
 mkdir -p "./$repoName"
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
sudo rm -rf "./$repoName" || true
mkdir -p "./$repoName"
🤖 Prompt for AI Agents
In template/.devcontainer/windows-host-helper.sh around lines 30 to 31, line 31
is dead code because line 30 already deletes the entire ./\$repoName directory;
remove the second command (the sudo rm -rf "./\$repoName/*.md" line) so only the
directory removal remains, preserving any needed error-suppression (|| true)
behavior.

sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
mkdir -p "./$repoName"
sudo chown -R "$(whoami):$(whoami)" "./$repoName" # TODO: see if this alone is enough to fix everything

🧹 Nitpick | 🔵 Trivial

Address or track the TODO comment.

The TODO comment suggests uncertainty about whether the chown command alone is sufficient to resolve permission issues. Consider either validating that this approach works and removing the TODO, or creating an issue to track further investigation.

Would you like me to open a new issue to track validation of this permission fix approach?

🤖 Prompt for AI Agents
In template/.devcontainer/windows-host-helper.sh around line 33, the TODO
comment questions whether the sudo chown -R "$(whoami):$(whoami)" "./$repoName"
line is sufficient to fix permission issues; validate this approach by testing
the script on a representative Windows-hosted devcontainer workflow (confirm
file ownership and ability to read/write/execute needed files), then either
remove the TODO if validation passes or create a tracking issue documenting the
failing scenarios, test steps, and proposed follow-ups if it fails; update the
comment to reference the created issue number if you open one.

Comment on lines +12 to 15
pr-short-num:
  description: the last two digits of the PR number (to be used for fixed width naming, like Pulumi stacks)
  value: ${{ jobs.get-values.outputs.pr-short-num }}


⚠️ Potential issue | 🔴 Critical

Bash substring expression for pr-short-num is invalid and will break the workflow

The line:

short_number=${full_number:${#full_number}<2?0:-2}

is not valid bash syntax. Bash doesn’t support a ternary (? :) directly inside the parameter expansion offset, so this will result in a “bad substitution” at runtime, breaking the Get the PR number step and thus the workflow.

You also currently emit the output with:

echo number=$(echo $short_number) >> $GITHUB_OUTPUT

which:

  • Relies on unnecessary command substitution.
  • Omits quotes, risking issues if the value ever contains whitespace.
  • Is less clear than the usual key=value form.

Recommend replacing the block that computes and writes short_number with something like:

-          short_number=${full_number:${#full_number}<2?0:-2} # only use the last two digits so that the stack name is no more than 7 characters and doesn't get too long. Based on https://stackoverflow.com/questions/19858600/accessing-last-x-characters-of-a-string-in-bash
-
-          echo number=$(echo $short_number) >> $GITHUB_OUTPUT
-          echo "PR number extracted as $full_number and truncated to $short_number"
+          if [ -z "$full_number" ]; then
+            echo "Could not determine PR number from event payload" >&2
+            exit 1
+          fi
+
+          # Use the last two digits, or the whole number if it is shorter than 2 digits
+          if [ ${#full_number} -lt 2 ]; then
+            short_number="$full_number"
+          else
+            short_number="${full_number: -2}"
+          fi
+
+          echo "number=$short_number" >> "$GITHUB_OUTPUT"
+          echo "PR number extracted as $full_number and truncated to $short_number"

This keeps the intended behavior (last two digits, with a safeguard for very small PR numbers) while using valid bash and the standard $GITHUB_OUTPUT format.

Also applies to: 27-30, 49-61

🤖 Prompt for AI Agents
template/.github/workflows/get-values.yaml around lines 12-15 (and similarly at
27-30, 49-61): the current bash substring expression for computing pr-short-num
is invalid and will fail at runtime; replace it with a valid bash sequence that
computes the last two digits (pad with a leading zero when the PR number length
is < 2) using standard parameter expansion or printf (e.g., compute full_number,
use ${full_number: -2} with a space before -2 or use printf "%02d" with
arithmetic), and emit the output using the recommended GitHub Actions format
without unnecessary command substitution like: echo "pr-short-num=<value>" >>
$GITHUB_OUTPUT (ensure the value is quoted and no subshell is used).

@ejfine ejfine merged commit 5c68b7d into main Dec 15, 2025
9 checks passed
@ejfine ejfine deleted the vcrpy branch December 15, 2025 20:40