Copier update: VCRpy #81
Walkthrough

This PR introduces asyncio and VCRpy test fixture support, updates numerous dependency and tool versions across the codebase, refactors Windows host helper scripts to use rsync, updates GitHub Actions workflows with enhanced error handling and newer action versions, modifies pre-commit configurations to reflect new generated-code paths, and adjusts configuration exclusions in pyright and gitignore patterns.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
Pre-merge checks: ❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
Actionable comments posted: 13
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
.github/actions/install_deps/action.yml (1)
59-79: Version bumps for setup-python and setup-node contain breaking changes that require attention
- actions/setup-python@v6.1.0: Node 20→24 upgrade requires GitHub Actions Runner v2.327.1 or later. urllib3 bumped to 2.x with documented breaking changes affecting urllib3 internals.
- actions/setup-node@v6.1.0: Removed always-auth input/handling; removed automatic caching support for non-npm package managers (caching now requires explicit configuration).
- aws-actions/configure-aws-credentials@v5.1.1: Minor release with dependency updates, no breaking changes noted.
Verify that all runners meet the minimum version requirement and remove any always-auth configurations before merging.
template/.github/actions/install_deps/action.yml (1)
59-79: Update workflows to accommodate breaking changes in setup-python@v6.1.0 and setup-node@v6.1.0
- For setup-python@v6.1.0: require GitHub runner v2.327.1 or later due to Node 24 upgrade; verify urllib3 v2 compatibility.
- For setup-node@v6.1.0: remove any `always-auth` usage and adjust cache inputs (automatic caching now only for npm).
- aws-actions/configure-aws-credentials@v5.1.1 introduces no breaking changes.
📜 Review details
Configuration used: Path: .coderabbit.yaml
Review profile: ASSERTIVE
Plan: Pro
⛔ Files ignored due to path filters (1)
`uv.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (38)
- `.copier-answers.yml` (2 hunks)
- `.devcontainer/devcontainer.json` (2 hunks)
- `.devcontainer/install-ci-tooling.py` (1 hunks)
- `.devcontainer/windows-host-helper.sh` (1 hunks)
- `.github/actions/install_deps/action.yml` (2 hunks)
- `.github/actions/update-devcontainer-hash/action.yml` (1 hunks)
- `.github/reusable_workflows/build-docker-image.yaml` (1 hunks)
- `.github/workflows/ci.yaml` (6 hunks)
- `.github/workflows/get-values.yaml` (2 hunks)
- `.github/workflows/hash_git_files.py` (1 hunks)
- `.github/workflows/pre-commit.yaml` (3 hunks)
- `.github/workflows/tag-on-merge.yaml` (1 hunks)
- `.gitignore` (1 hunks)
- `.pre-commit-config.yaml` (6 hunks)
- `copier.yml` (1 hunks)
- `copier_template_resources/python_asyncio/asyncio_fixtures.py` (1 hunks)
- `copier_template_resources/python_asyncio/background_tasks.py` (1 hunks)
- `copier_template_resources/vcrpy_fixtures.py` (1 hunks)
- `extensions/context.py` (5 hunks)
- `pyproject.toml` (1 hunks)
- `pyrightconfig.json` (1 hunks)
- `template/.devcontainer/devcontainer.json.jinja` (2 hunks)
- `template/.devcontainer/windows-host-helper.sh` (1 hunks)
- `template/.github/actions/install_deps/action.yml` (2 hunks)
- `template/.github/actions/update-devcontainer-hash/action.yml` (1 hunks)
- `template/.github/workflows/get-values.yaml` (2 hunks)
- `template/.github/workflows/hash_git_files.py` (1 hunks)
- `template/.github/workflows/pre-commit.yaml` (3 hunks)
- `template/.gitignore` (1 hunks)
- `template/.pre-commit-config.yaml` (6 hunks)
- `template/README.md.jinja` (1 hunks)
- `template/pyproject.toml.jinja` (1 hunks)
- `template/pyrightconfig.json` (1 hunks)
- `template/tests/unit/conftest.py.jinja` (1 hunks)
- `template/tests/unit/{% if configure_vcrpy %}vcrpy_fixtures.py{% endif %}` (1 hunks)
- `tests/copier_data/data1.yaml` (1 hunks)
- `tests/copier_data/data2.yaml` (1 hunks)
- `tests/copier_data/data3.yaml` (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
.github/workflows/hash_git_files.py (1)
template/.github/workflows/hash_git_files.py (1)
find_devcontainer_hash_line(75-88)
🪛 actionlint (1.7.9)
.github/workflows/pre-commit.yaml
72-72: shellcheck reported issue in this script: SC2086:info:6:8: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 YAMLlint (1.37.1)
copier.yml
[warning] 43-43: truthy value should be one of [false, true]
(truthy)
[warning] 47-47: truthy value should be one of [false, true]
(truthy)
🔇 Additional comments (33)
.github/reusable_workflows/build-docker-image.yaml (1)
68-76: Version bumps are consistent and aligned across the repo

The upgrades to `actions/checkout@v6.0.1` and `aws-actions/configure-aws-credentials@v5.1.1` are used consistently throughout the repo, and the surrounding workflow logic and inputs remain unchanged.

template/.gitignore (1)
79-80: Recursive `.DS_Store` ignore pattern is appropriate

Switching to `**/.DS_Store` gives consistent ignore coverage for macOS metadata files at any depth and is aligned with common practice.

.gitignore (1)
79-80: Consistent `.DS_Store` ignore across repo and template

Using `**/.DS_Store` here to match the template keeps behavior consistent and correctly ignores macOS metadata files anywhere in the tree.

template/pyrightconfig.json (1)
13-18: Updated pyright excludes align with generated/template resources

Excluding `**/generated/graphql`, `**/venv`, and `**/copier_template_resources` is consistent with keeping pyright focused on non-generated code and avoiding noise from template resource modules, especially given these are support files rather than part of the user-facing template.

template/.github/workflows/pre-commit.yaml (2)
61-79: Improved pre-commit diagnostics look correct

Capturing the pre-commit exit code, printing a clear failure message, and showing `git diff` before exiting with the same code gives better feedback without altering success behavior.
34-44: Checkout action version and configuration are appropriate
`actions/checkout@v6.0.1` is the current recommended version. The use of `persist-credentials: false` aligns with best practices for pre-commit workflows, which only require read access to the repository. v6 improved credential security by storing tokens in `$RUNNER_TEMP` rather than `.git/config`, making this configuration safe and idiomatic.

pyrightconfig.json (1)
13-18: Root pyright excludes correctly mirror template configuration

Using `**/generated/graphql`, `**/venv`, and `**/copier_template_resources` here keeps the root pyright behavior in sync with the template and avoids type-checking generated/template-resource code.

template/.github/actions/update-devcontainer-hash/action.yml (1)
29-35: Explicit ref with updated checkout action looks good

Using `actions/checkout@v6.0.1` and wiring `ref: ${{ inputs.branch }}` makes the composite action deterministic about which branch it updates. The version is stable (released Dec 2, 2025), and both `ref` and `persist-credentials` remain fully supported with no breaking changes to their basic usage.

pyproject.toml (1)
7-14: Pytest version bump to 9.0.2 is safe with no compatibility concerns

The bump to `pytest>=9.0.2` is straightforward. Both `pytest-cov>=7.0.0` and `pytest-randomly>=4.0.1` are confirmed compatible; pytest 9.0.2 includes compatibility fixes from earlier 9.0.x issues and introduces no new breaking changes affecting plugin usage.

.devcontainer/devcontainer.json (1)
19-20: Devcontainer extension bump and regenerated context hash look correct

The coderabbit VS Code extension version bump and the updated devcontainer context hash comment are consistent with the hash_git_files.py logic and don’t change runtime behavior of the container itself. Assuming the extension version exists and the hash was regenerated via the hook, this is good to go.
You may want to run the devcontainer context hash pre-commit hook once locally (or `pre-commit run compute-devcontainer-context-hash --all-files`) to confirm the hash comment stays stable across runs.

Also applies to: 61-61
template/.github/workflows/hash_git_files.py (1)
57-71: Safer exception handling and more robust hash-line search

Narrowing the catch in `compute_adler32` to `IsADirectoryError` avoids hiding unexpected IO issues while still skipping directory-like entries, which is a solid improvement. Scanning `lines` in reverse in `find_devcontainer_hash_line` correctly biases toward the final closing brace, making the hash insertion logic more resilient to nested objects earlier in the file.

Also applies to: 75-88
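The reverse scan itself isn't shown in this excerpt; a minimal sketch of the idea (the function name comes from the review, while the sample JSON content and the exact matching rule are assumptions):

```python
def find_devcontainer_hash_line(lines: list[str]) -> int:
    """Return the index of the last line that is just a closing brace.

    Scanning from the end biases the search toward the file's final '}',
    so nested objects earlier in the JSON are never mistaken for it.
    """
    for idx in reversed(range(len(lines))):
        if lines[idx].strip().rstrip(",") == "}":
            return idx
    raise ValueError("no closing brace found")


# Hypothetical devcontainer.json-like content for illustration.
lines = [
    "{",
    '  "features": {',
    '    "nested": {}',
    "  },",
    "  // devcontainer context hash: abc123",
    "}",
]
print(find_devcontainer_hash_line(lines))  # prints 5 (the final '}')
```

A forward scan would stop at the `}` on line 3 of this sample; iterating in reverse makes the final brace the first match, which is what the hash-insertion logic needs.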
template/.pre-commit-config.yaml (1)
45-45: Template pre-commit updates stay consistent with root configuration

The template’s hook revisions and the move to `.*/generated/graphql/.*` in excludes mirror the root `.pre-commit-config.yaml`, which keeps generated projects aligned with this repo’s tooling. No behavioral regressions are apparent as long as generated GraphQL now lives under `generated/graphql`.

After updating the template, it’s worth instantiating a sample project (with GraphQL enabled) and running `pre-commit run --all-files` there to ensure the excludes match the generated layout.

Also applies to: 111-111, 198-198, 252-253, 260-261, 270-271, 277-279, 282-282, 289-291, 305-307
template/README.md.jinja (1)
9-10: Improved conditional rendering for Python and Codecov badges

Scoping the Python Versions badge under `if not is_frozen_executable` and the Codecov badge under `if use_codecov` matches the intended semantics: no PyPI-related badges for frozen-only distributions and no Codecov badge unless coverage reporting is configured. The `{% raw %}`/`{% endraw %}` juggling remains balanced.

When you next apply the template, check the rendered README for a few combinations (`is_frozen_executable` true/false, `use_codecov` true/false) to confirm the expected badges appear.

extensions/context.py (1)
13-116: Version matrix updates and relaxed constraints look consistent, but warrant end-to-end verification

The updated versions for Python tooling (pytest/Sphinx/pydantic/pyinstaller), async/VCR stack (`vcrpy`, `pytest-recording`, `pytest-asyncio`), JS/Node/Nuxt ecosystem, container images, and GitHub Actions all appear internally consistent, and shifting some specs to `>=...` gives downstream projects more flexibility.

Given how central this context is to generated projects and CI, it’s important to verify:

- pyproject.toml / template dependencies interpolate these new strings correctly (especially the `>=...` specs).
- The new runner labels and action versions (`ubuntu-24.04`, `windows-2025`, updated `gha_*`) are supported in your target GitHub environment.
- A freshly generated project passes its full CI using these versions.

Consider generating at least one project for each major configuration (sync/async, GraphQL, VCRpy enabled, etc.), then running its CI or at least `uv lock`, `pytest`, and the GitHub Actions workflows to confirm all version combinations are valid.
uv lock,pytest, and the GitHub Actions workflows to confirm all version combinations are valid.template/tests/unit/{% if configure_vcrpy %}vcrpy_fixtures.py{% endif %} (1)
1-1: VCRpy fixture indirection follows existing template test pattern

Pointing the conditional `vcrpy_fixtures.py` test file at `../../../copier_template_resources/vcrpy_fixtures.py` is consistent with the other template test resources and keeps the shared fixture implementation centralized. The relative path looks correct from `template/tests/unit` back to the repo root.

.pre-commit-config.yaml (1)
45-45: Bump pre-commit hook revisions and adjust GraphQL excludes
All hook revisions (typos, mirrors-prettier, check-jsonschema, ruff, pylint) have been updated, and exclusion patterns for generated GraphQL files now use `.*/generated/graphql/.*`. Run `pre-commit run --all-files` to confirm the new revisions and patterns behave as expected.

Applies to .pre-commit-config.yaml (lines 45, 111, 198, 252–253, 260–261, 270–271, 277–279, 282, 289–291, 305–307)

.github/actions/update-devcontainer-hash/action.yml (1)
29-35: Checkout version bump and explicit `ref` look correct

Using `actions/checkout@v6.0.1` and wiring `ref: ${{ inputs.branch }}` is consistent with the action’s `branch` input and the later `git push origin HEAD:${{ inputs.branch }}` usage. This should ensure the intended branch tip is updated.

tests/copier_data/data1.yaml (1)
10-11: New VCRpy/asyncio flags aligned with template usage
`configure_vcrpy: true` and `configure_python_asyncio: true` cleanly exercise the “both enabled” path and match the naming used elsewhere in the template.
10-11: Mixed VCRpy/asyncio configuration looks intentional

This `false`/`true` combination gives coverage for “asyncio without VCRpy” and keeps key names consistent with other copier test data files.

.github/workflows/tag-on-merge.yaml (1)
17-21: Tag workflow checkout bump is straightforward

Upgrading to `actions/checkout@v6.0.1` with the existing `with` parameters (merge commit SHA, full history, no credentials) preserves behavior while aligning with newer actions.

.devcontainer/install-ci-tooling.py (1)
10-11: UV/PNPM version pin bumps are isolated and safe

Updating `UV_VERSION` to `"0.9.17"` and `PNPM_VERSION` to `"10.25.0"` keeps the installer logic unchanged and just refreshes tool versions; behavior should remain the same assuming those versions are available upstream.

tests/copier_data/data2.yaml (1)
10-11: Baseline VCRpy/asyncio-disabled config looks good

These flags give a clear “both disabled” test scenario and are consistent with the other copier data variants.
.github/workflows/pre-commit.yaml (1)
34-46: Pre-commit workflow action upgrades and caching look solid

Using `actions/checkout@v6.0.1` in both checkout paths and `actions/cache@v4.3.0` for pre-commit hooks is consistent with the rest of the PR. The enhanced pre-commit step with explicit exit-code capture and a diff dump should make failures easier to debug.

Also applies to: 61-70
.copier-answers.yml (1)
2-16: Upstream commit bump and new template feature hints are consistent

Tracking `_commit: v0.0.90` plus adding `template_might_want_to_use_python_asyncio` and `template_might_want_to_use_vcrpy` aligns this answers file with the new template capabilities and the copier test data you added.

.github/workflows/hash_git_files.py (1)
68-70: Exception narrowing and reverse search logic look solid

Catching only `IsADirectoryError` keeps real file IO issues visible while handling the Windows/symlink edge case, and the `reversed(range(len(lines)))` loop is clear and equivalent to the prior behavior for locating the closing brace and hash line.

Also applies to: 77-87
template/pyproject.toml.jinja (1)
31-35: Confirm version specifiers for `pytest-recording` and `vcrpy`

The conditional wiring around `configure_vcrpy`, `create_docs`, and `is_frozen_executable` looks good, but please confirm how the version variables are defined:

- If `pytest_recording_version`/`vcrpy_version` are bare versions like `1.2.3`, you probably want:

- "pytest-recording{% endraw %}{{ pytest_recording_version }}{% raw %}",
- "vcrpy{% endraw %}{{ vcrpy_version }}{% raw %}",
+ "pytest-recording>={% endraw %}{{ pytest_recording_version }}{% raw %}",
+ "vcrpy>={% endraw %}{{ vcrpy_version }}{% raw %}",
- If they already include the comparison operator (e.g. `==1.2.3`), the current template is fine.
5-10: Devcontainer AWS feature gating and extension bumps look consistent

Conditionally including the AWS CLI devcontainer feature and AWS Toolkit extension only when not a child of the base template is a good way to avoid duplication, and the `coderabbit-vscode` version bump is straightforward. Just ensure the new feature and extension versions match what your devcontainer base images and VS Code environment expect.

Also applies to: 28-33
.github/workflows/ci.yaml (2)
23-24: Job dependencies and Python version wiring are coherent

Chaining `get-values → pre-commit → lint-matrix → required-check` via `needs` makes the workflow ordering explicit, and passing `python-version: 3.12.7` into the reusable pre-commit workflow keeps it aligned with the lint matrix. Just be aware that `required-check` will still fail if `lint-matrix` ends in a non-success state (including `skipped`), which appears to be the current intended behavior.

Also applies to: 29-30, 33-35, 146-148
59-59: Action bumps and enhanced pre-commit handling look good; confirm compatibility
- Updating to newer `actions/checkout` and `actions/cache` versions plus caching pre-commit hooks is standard and should improve stability/perf.
- The updated pre-commit step correctly captures a non-zero exit code, prints a diff for debugging, and then exits with the same code, while still allowing success to short-circuit.
Please confirm:
- The new action versions (lines 59, 116) are supported by your runners/images.
- The pre-commit reusable workflow/entrypoint (lines 125-135) doesn't itself assume different SKIP semantics or Python versions.
.github/workflows/get-values.yaml (1)
12-14: New `pr-short-num` output wiring and checkout bump look fine; verify callers handle non‑PR events

The new `pr-short-num` output correctly forwards `steps.find-pr-num.outputs.number`, and the checkout bump is straightforward. Just ensure:

- Any workflows calling this one tolerate `pr-short-num` being empty when the event is not `pull_request`/`merge_group`.
- No callers assume this output is always set.

You can sanity-check usage by searching for all workflow_call usages of `get-values.yaml` in `.github/workflows` and inspecting how `pr-short-num` is consumed.

Also applies to: 27-29, 38-38
template/.devcontainer/windows-host-helper.sh (1)
46-50: LGTM! Rsync approach is clean and efficient.

The rsync implementation properly handles the merge with appropriate exclusions for volume mounts. The `-a` flag preserves permissions and timestamps, and the explicit excludes prevent overwriting mounted volumes.

copier_template_resources/vcrpy_fixtures.py (2)
18-21: LGTM! Windows-specific network handling is well-documented.

The comment clearly explains why ALLOWED_HOSTS needs special handling on Windows and acknowledges the trade-off of reduced safety in CI while still maintaining checks on Linux.
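The trade-off described here can be sketched roughly as follows (the `ALLOWED_HOSTS` value and the helper name are illustrative assumptions, not the PR's actual fixture code):

```python
import platform

# Placeholder allow-list; the real fixture's value is not shown in this review.
ALLOWED_HOSTS = ["localhost", "127.0.0.1"]


def vcr_allowed_hosts() -> "list[str] | None":
    """Return the host allow-list for recorded requests, or None to disable it.

    On Windows, localhost traffic can resolve in ways the recorder cannot
    match reliably, so the check is skipped there (less safe, but avoids
    false CI failures). Linux and other platforms keep the stricter check.
    """
    if platform.system() == "Windows":
        return None
    return ALLOWED_HOSTS


print(vcr_allowed_hosts())
```

The key design point is that the reduced safety is confined to one platform: any host-allow-list violation still fails on Linux runners, so the weaker Windows behavior cannot silently mask a regression everywhere.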
39-43: The defensive type casting and assertion are appropriate for this use case.

VCR.py documentation confirms `match_on` accepts a sequence (list or tuple) of matcher names. Casting to `tuple[str, ...]` before the concatenation operation is the right approach, and the runtime `isinstance` check is good defensive programming given that VCR lacks complete type annotations. The approach is sound.
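A minimal sketch of that check-then-cast-then-concatenate pattern (the helper name and the extra `body` matcher are assumptions; only the default matcher names come from VCR.py's documentation):

```python
from typing import cast

# VCR.py's documented default request matchers.
DEFAULT_MATCH_ON = ("method", "scheme", "host", "port", "path", "query")


def extend_match_on(existing: object, extra: "tuple[str, ...]") -> "tuple[str, ...]":
    """Append extra matcher names to an existing match_on sequence.

    VCR.py is not fully typed, so verify the runtime type before casting;
    normalizing to a tuple makes the concatenation well-typed.
    """
    if not isinstance(existing, (list, tuple)):
        raise TypeError(f"unexpected match_on type: {type(existing)!r}")
    base = cast("tuple[str, ...]", tuple(existing))
    return base + extra


print(extend_match_on(DEFAULT_MATCH_ON, ("body",)))
```

The runtime check means a surprising value from the untyped library fails loudly at fixture setup rather than producing a confusing cassette-matching error later.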
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
mkdir -p "./$repoName"
sudo chown -R "$(whoami):$(whoami)" "./$repoName" # TODO: see if this alone is enough to fix everything
🧹 Nitpick | 🔵 Trivial
Rsync-based sync is a nice simplification; drop redundant rm line
The new flow (fresh clone into a tmp dir, rsync -av into ./$repoName with excludes, then cleanup) is much simpler and safer than the previous manual copy logic. One nit: `sudo rm -rf "./$repoName"` already removes the entire target directory; the subsequent `sudo rm -rf "./$repoName/*.md"` is redundant, and the quotes prevent glob expansion anyway. It can be removed to avoid confusion.
Also applies to: 38-50, 52-56
🤖 Prompt for AI Agents
.devcontainer/windows-host-helper.sh around lines 30 to 33: the second removal
command `sudo rm -rf "./$repoName/*.md"` is redundant because `sudo rm -rf
"./$repoName"` already deletes the directory and the quotes prevent glob
expansion; remove that redundant line here and also remove the identical
redundant rm lines in the other ranges (lines 38-50 and 52-56) referenced in the
comment; if you actually intended to only remove markdown files instead of the
whole directory, replace the whole-dir rm with a properly unquoted glob-aware
command or use find to target *.md, but per the review just delete the redundant
`rm -rf "./$repoName/*.md"` occurrences.
  - name: Run pre-commit
-   run: pre-commit run -a
+   run: |
+     pre-commit run -a || PRE_COMMIT_EXIT_CODE=$?
+     if [ -n "$PRE_COMMIT_EXIT_CODE" ]; then
+       echo "Pre-commit failed with exit code $PRE_COMMIT_EXIT_CODE"
+       echo "Showing git diff:"
+       git --no-pager diff
+       exit $PRE_COMMIT_EXIT_CODE
+     fi
Quote PRE_COMMIT_EXIT_CODE when exiting to satisfy shellcheck
The new failure-handling block is sensible, but shellcheck’s SC2086 warning is likely about unquoted $PRE_COMMIT_EXIT_CODE in the exit command. Quoting it avoids word-splitting/globbing warnings and keeps linters happy without changing behavior.
You can address this with a minimal tweak:
- pre-commit run -a || PRE_COMMIT_EXIT_CODE=$?
- if [ -n "$PRE_COMMIT_EXIT_CODE" ]; then
- echo "Pre-commit failed with exit code $PRE_COMMIT_EXIT_CODE"
+ pre-commit run -a || PRE_COMMIT_EXIT_CODE=$?
+ if [ -n "$PRE_COMMIT_EXIT_CODE" ]; then
+ echo "Pre-commit failed with exit code $PRE_COMMIT_EXIT_CODE"
echo "Showing git diff:"
git --no-pager diff
- exit $PRE_COMMIT_EXIT_CODE
+ exit "$PRE_COMMIT_EXIT_CODE"
      fi

🧰 Tools
🪛 actionlint (1.7.9)
72-72: shellcheck reported issue in this script: SC2086:info:6:8: Double quote to prevent globbing and word splitting
(shellcheck)
🤖 Prompt for AI Agents
In .github/workflows/pre-commit.yaml around lines 71 to 79, the exit command
uses an unquoted shell variable which triggers shellcheck SC2086; update the
exit invocation to quote the variable (exit "$PRE_COMMIT_EXIT_CODE") so the
value is passed safely without word-splitting or globbing while preserving
behavior.
from backend_api.background_tasks import background_task_exceptions
from backend_api.background_tasks import background_tasks_set
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Description: Verify backend_api.background_tasks module exists and exports required symbols
# Search for the background_tasks module
echo "=== Searching for background_tasks.py ==="
fd -t f "background_tasks.py"
echo -e "\n=== Searching for symbol definitions ==="
# Search for definitions of the required globals
rg -n "background_task_exceptions|background_tasks_set" --type py -C 3

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 6049
🏁 Script executed:
#!/bin/bash
# Check if backend_api package exists
echo "=== Searching for backend_api package ==="
fd -t d "backend_api" --max-depth 3
echo -e "\n=== Checking for __init__.py in backend_api ==="
find . -path "*/backend_api/__init__.py"
echo -e "\n=== Checking if asyncio_fixtures.py is in a template ==="
# Look at the directory structure
ls -la copier_template_resources/python_asyncio/

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 497
Fix incorrect import paths in asyncio_fixtures.py.
The imports reference backend_api.background_tasks, but this module does not exist in the codebase. The symbols background_task_exceptions and background_tasks_set are defined in copier_template_resources/python_asyncio/background_tasks.py in the same directory. Update the imports to use relative imports:
from .background_tasks import background_task_exceptions
from .background_tasks import background_tasks_setAlternatively, if this template requires absolute imports, ensure backend_api is properly configured as the package namespace in the generated project.
🤖 Prompt for AI Agents
In copier_template_resources/python_asyncio/asyncio_fixtures.py around lines 4
to 5, the file is importing from backend_api.background_tasks which doesn't
exist; update the imports to reference the local background_tasks module
instead. Replace those imports with relative imports from .background_tasks for
background_task_exceptions and background_tasks_set (i.e., import from the
sibling module in the same package), or if absolute imports are required ensure
the project package backend_api is defined and configured so the absolute import
resolves.
async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]):
    _, pending = await asyncio.wait(tasks_list, timeout=5.0)
    if pending:
        raise RuntimeError(f"There are still pending tasks: {pending}")
🧹 Nitpick | 🔵 Trivial
Consider task cancellation on timeout.
The _wait_for_tasks function raises an error if tasks are pending after the timeout, but doesn't cancel the tasks. Pending tasks could continue running in the background and cause issues in subsequent tests.
Apply this diff to cancel pending tasks:
 async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]):
     _, pending = await asyncio.wait(tasks_list, timeout=5.0)
     if pending:
+        for task in pending:
+            task.cancel()
         raise RuntimeError(f"There are still pending tasks: {pending}")
raise RuntimeError(f"There are still pending tasks: {pending}")📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
async def _wait_for_tasks(tasks_list: list[asyncio.Task[None]]):
    _, pending = await asyncio.wait(tasks_list, timeout=5.0)
    if pending:
        for task in pending:
            task.cancel()
        raise RuntimeError(f"There are still pending tasks: {pending}")
🤖 Prompt for AI Agents
In copier_template_resources/python_asyncio/asyncio_fixtures.py around lines 8
to 11, _wait_for_tasks currently raises if tasks remain pending after the
timeout but does not cancel them; update it to iterate over pending tasks, call
task.cancel() for each, then await their completion (e.g., await
asyncio.gather(*pending, return_exceptions=True) or await asyncio.wait(pending))
to ensure they are cancelled and any exceptions are observed, and finally raise
the RuntimeError (including pending details if desired).
try:
    loop = asyncio.get_running_loop()
except RuntimeError:
    asyncio.run(_wait_for_tasks(tasks_list))
else:
    loop.run_until_complete(_wait_for_tasks(tasks_list))
🧩 Analysis chain
🏁 Script executed:
find . -type f -name "asyncio_fixtures.py" | head -20

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 162
🏁 Script executed:
cat -n ./copier_template_resources/python_asyncio/asyncio_fixtures.py

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 1589
🏁 Script executed:
python3 << 'EOF'
import asyncio

# Test 1: Verify that run_until_complete on running loop raises RuntimeError
async def test_run_until_complete_on_running_loop():
    """Test what happens when calling run_until_complete on a running loop"""
    loop = asyncio.get_running_loop()

    async def dummy_coro():
        return "done"

    try:
        # This should raise RuntimeError
        loop.run_until_complete(dummy_coro())
        print("ERROR: run_until_complete did not raise RuntimeError")
    except RuntimeError as e:
        print(f"CONFIRMED: run_until_complete raises RuntimeError: {e}")

asyncio.run(test_run_until_complete_on_running_loop())

# Test 2: Verify create_task behavior
print("\n--- Testing create_task behavior ---")

async def test_create_task():
    """Test if create_task waits for completion or just schedules"""
    loop = asyncio.get_running_loop()

    async def long_task():
        await asyncio.sleep(0.1)
        return "completed"

    task = loop.create_task(long_task())
    print(f"Task created, done={task.done()}")
    # Note: task is still running, not awaited yet
    # To verify, we'd need to await it
    result = await task
    print(f"After await: done={task.done()}, result={result}")

asyncio.run(test_create_task())

# Test 3: Verify asyncio.run() outside loop
print("\n--- Testing asyncio.run outside loop ---")

async def simple_coro():
    return "success"

result = asyncio.run(simple_coro())
print(f"asyncio.run() outside loop: {result}")
EOF

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 576
🏁 Script executed:
# Check if there are any tests using this fixture and whether they're async
grep -r "fail_on_background_task_errors" --include="*.py" | head -20

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 198
🏁 Script executed:
# Check the background_tasks module to understand what we're dealing with
grep -r "background_tasks_set\|background_task_exceptions" --include="*.py" -B 2 -A 2 | head -50

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 4405
🏁 Script executed:
# Check if this is used in an async context and review the broader structure
cat -n ./copier_template_resources/python_asyncio/background_tasks.py | head -50

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 1950
🏁 Script executed:
# Let me check if there's any documentation or conftest about how pytest-asyncio is configured
find . -name "conftest.py" -o -name "pytest.ini" -o -name "pyproject.toml" | head -10

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 151
🏁 Script executed:
cat ./pytest.ini ./pyproject.toml 2>/dev/null | head -100

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 1745
🏁 Script executed:
# Let me verify the exact issue - is there a scenario where the fixture runs while an event loop is active?
# Check if tests might be async
grep -r "async def test\|@pytest.mark.asyncio" . --include="*.py" | head -20

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 82
🏁 Script executed:
# Let me test what the proper fix should be - we need to wait for tasks even in a running loop
python3 << 'EOF'
import asyncio

# Test proper handling when there's a running loop
async def test_proper_handling():
    """Test how to properly wait for coroutines in a running loop"""
    loop = asyncio.get_running_loop()

    async def _wait_for_tasks(tasks_list):
        return "completed"

    # When loop is running, we CAN'T use run_until_complete
    # Option 1: gather and await (but we're in async context, need to be in sync fixture)
    # Option 2: Just let the loop continue naturally
    tasks_list = []

    # What create_task does - schedules but doesn't wait
    task = loop.create_task(_wait_for_tasks(tasks_list))
    print(f"After create_task: done={task.done()}")

    # To properly wait in a running loop, we'd need to await
    await task
    print(f"After await: done={task.done()}")

print("--- Testing if fixture context matters ---")
# The fixture is SYNC but tries to handle async context
# If running in sync context: no event loop running
# If running during async test: event loop is running

# For sync fixture calling async code when NO loop exists:
result = asyncio.run(asyncio.sleep(0))
print(f"asyncio.run in sync context: OK")

# But the fixture can't await, so it can't use the loop directly
# The real issue: fixture is sync but tries to manage async cleanup
EOF

Repository: LabAutomationAndScreening/copier-python-package-template
Length of output: 174
🏁 Script executed:
# Let me verify a critical detail: when does the fixture teardown run relative to event loop lifecycle?
python3 << 'EOF'
# Simulate what happens when a sync fixture tries to handle async cleanup
import asyncio

print("=== Scenario 1: Sync fixture, no running loop ===")

# Current code's except branch - CORRECT
async def _wait():
    await asyncio.sleep(0.01)
    return "done"

# asyncio.run works fine
result = asyncio.run(_wait())
print(f"asyncio.run result: {result}")

print("\n=== Scenario 2: Sync fixture, WITH running loop (the problem) ===")

async def simulate_fixture_in_running_loop():
    loop = asyncio.get_running_loop()
    print(f"Loop exists: {loop}")

    async def _wait():
        await asyncio.sleep(0.01)
        return "done"

    # Current code's else branch - INCORRECT (would raise)
    print("Trying run_until_complete on running loop...")
    try:
        loop.run_until_complete(_wait())
    except RuntimeError as e:
        print(f" ERROR: {e}")

    # Suggested fix's else branch - INCOMPLETE (doesn't wait)
    print("Trying create_task (suggested fix)...")
    task = loop.create_task(_wait())
    print(f" Task created but NOT awaited: done={task.done()}")
    print(" Fixture would return here without waiting!")

    # Proper way to handle this scenario in async context
    print("Proper way (await in async context)...")
    result = await asyncio.gather(task)
    print(f" Task completed: {result}")

asyncio.run(simulate_fixture_in_running_loop())

print("\n=== Conclusion ===")
print("Suggested fix's create_task() doesn't wait for completion!")
EOF
Length of output: 874
Fix the inverted event loop handling logic.
Lines 24-29 have a critical flaw: when asyncio.get_running_loop() succeeds (returns a loop), the code calls loop.run_until_complete(), which raises RuntimeError on an already-running loop. Conversely, when no loop exists (RuntimeError), it correctly uses asyncio.run().
The logic is backwards. However, the suggested fix of using loop.create_task() is incomplete—it schedules the task without waiting for it to complete, causing the fixture to return before tasks finish.
Since this fixture is synchronous and cannot await, a proper fix requires one of the following:
- Making the fixture async: `@pytest_asyncio.fixture` with `async def`
- Avoiding the running-loop scenario entirely by ensuring background tasks complete before fixture teardown
- If a running loop exists during teardown, using `asyncio.ensure_future()` with a polling mechanism (though awkward in a sync context)
First, clarify the expected test execution context: are tests async or sync? This will determine the correct approach.
🤖 Prompt for AI Agents
In copier_template_resources/python_asyncio/asyncio_fixtures.py around lines
24-29 the event-loop handling is inverted and currently schedules tasks without
correctly waiting when a loop is running; first confirm whether tests are
intended to run async or sync. If tests are async, convert the fixture to an
async pytest_asyncio fixture (async def) and await the _wait_for_tasks coroutine
directly. If tests are sync, keep a synchronous fixture and on no-running-loop
use asyncio.run(...), but when a loop is already running use
asyncio.run_coroutine_threadsafe(_wait_for_tasks(...), loop) and block on the
returned future.result(timeout) to ensure tasks complete before teardown;
implement appropriate timeout and error handling. Ensure you do not call
loop.run_until_complete on a running loop.
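As a concrete illustration of the sync-fixture option described in the prompt above, here is a minimal sketch. The names are hypothetical: `_wait_for_tasks` stands in for the template's real wait helper, and the `@pytest.fixture` decorator is shown as a comment so the snippet stays self-contained.

```python
import asyncio
from typing import Iterator


async def _wait_for_tasks(tasks: list) -> None:
    """Stand-in for the template's real wait helper (assumed name)."""
    if tasks:
        await asyncio.gather(*tasks)


def drain_background_tasks() -> Iterator[list]:
    # Would be decorated with @pytest.fixture in a real conftest.py
    tasks: list = []
    yield tasks  # test body runs here and may append tasks
    try:
        loop = asyncio.get_running_loop()
    except RuntimeError:
        # No loop is running (sync test context): starting a fresh one is safe.
        asyncio.run(_wait_for_tasks(tasks))
    else:
        # A loop is already running: submit from this sync context and block
        # on the result with a timeout so teardown cannot hang forever.
        asyncio.run_coroutine_threadsafe(_wait_for_tasks(tasks), loop).result(timeout=10)
```

Note that `run_coroutine_threadsafe` only avoids deadlock when the loop runs in a different thread than the fixture teardown, which matches the review's caveat that this path is awkward in a purely synchronous context.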
configure_vcrpy:
  type: bool
  help: Should VCRpy be configured for use during unit testing in Python?
  default: no
configure_python_asyncio:
  type: bool
  help: Will python code be using asyncio?
  default: no
🧹 Nitpick | 🔵 Trivial
New VCRpy/asyncio flags and Python default are coherent; consider canonical YAML booleans
The configure_vcrpy / configure_python_asyncio questions and the python_version default of 3.12.7 line up with the rest of the template and CI. If you want to quiet YAML linters and be fully canonical, you could switch the new default: no values to default: false (or change all similar defaults in the file in a follow‑up for consistency).
Also applies to: 49-52
🧰 Tools
🪛 YAMLlint (1.37.1)
[warning] 43-43: truthy value should be one of [false, true]
(truthy)
[warning] 47-47: truthy value should be one of [false, true]
(truthy)
🤖 Prompt for AI Agents
In copier.yml around lines 40-47 (and also apply the same change to lines
49-52), the new boolean flags use "default: no", which is non-canonical for YAML
booleans; update those defaults to "default: false" for configure_vcrpy and
configure_python_asyncio (and any other similar boolean defaults in the nearby
block) so linters and YAML parsers see canonical boolean values.
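For reference, the canonical form the linter is asking for would look like this (field layout assumed from the quoted hunk):

```yaml
configure_vcrpy:
  type: bool
  help: Should VCRpy be configured for use during unit testing in Python?
  default: false
configure_python_asyncio:
  type: bool
  help: Will python code be using asyncio?
  default: false
```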
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
Remove dead code on line 31.
Line 30 deletes the entire ./$repoName directory, so line 31 attempting to remove *.md files from that directory has no effect. This is either leftover code from a previous implementation or a logic error.
Apply this diff to remove the dead code:
 sudo rm -rf "./$repoName" || true
-sudo rm -rf "./$repoName/*.md"
 mkdir -p "./$repoName"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
sudo rm -rf "./$repoName" || true
mkdir -p "./$repoName"
🤖 Prompt for AI Agents
In template/.devcontainer/windows-host-helper.sh around lines 30 to 31, line 31
is dead code because line 30 already deletes the entire ./\$repoName directory;
remove the second command (the sudo rm -rf "./\$repoName/*.md" line) so only the
directory removal remains, preserving any needed error-suppression (|| true)
behavior.
sudo rm -rf "./$repoName" || true
sudo rm -rf "./$repoName/*.md"
mkdir -p "./$repoName"
sudo chown -R "$(whoami):$(whoami)" "./$repoName" # TODO: see if this alone is enough to fix everything
🧹 Nitpick | 🔵 Trivial
Address or track the TODO comment.
The TODO comment suggests uncertainty about whether the chown command alone is sufficient to resolve permission issues. Consider either validating that this approach works and removing the TODO, or creating an issue to track further investigation.
Would you like me to open a new issue to track validation of this permission fix approach?
🤖 Prompt for AI Agents
In template/.devcontainer/windows-host-helper.sh around line 33, the TODO
comment questions whether the sudo chown -R "$(whoami):$(whoami)" "./$repoName"
line is sufficient to fix permission issues; validate this approach by testing
the script on a representative Windows-hosted devcontainer workflow (confirm
file ownership and ability to read/write/execute needed files), then either
remove the TODO if validation passes or create a tracking issue documenting the
failing scenarios, test steps, and proposed follow-ups if it fails; update the
comment to reference the created issue number if you open one.
pr-short-num:
  description: the last two digits of the PR number (to be used for fixed width naming, like Pulumi stacks)
  value: ${{ jobs.get-values.outputs.pr-short-num }}
Bash substring expression for pr-short-num is invalid and will break the workflow
The line:
short_number=${full_number:${#full_number}<2?0:-2}

is not valid bash syntax. Bash doesn't support a ternary (`? :`) directly inside the parameter-expansion offset, so this will fail with a "bad substitution" error at runtime, breaking the "Get the PR number" step and thus the workflow.
You also currently emit the output with:
echo number=$(echo $short_number) >> $GITHUB_OUTPUT

which:
- Relies on unnecessary command substitution.
- Omits quotes, risking issues if the value ever contains whitespace.
- Is less clear than the usual `key=value` form.
Recommend replacing the block that computes and writes short_number with something like:
- short_number=${full_number:${#full_number}<2?0:-2} # only use the last two digits so that the stack name is no more than 7 characters and doesn't get too long. Based on https://stackoverflow.com/questions/19858600/accessing-last-x-characters-of-a-string-in-bash
-
- echo number=$(echo $short_number) >> $GITHUB_OUTPUT
- echo "PR number extracted as $full_number and truncated to $short_number"
+ if [ -z "$full_number" ]; then
+ echo "Could not determine PR number from event payload" >&2
+ exit 1
+ fi
+
+ # Use the last two digits, or the whole number if it is shorter than 2 digits
+ if [ ${#full_number} -lt 2 ]; then
+ short_number="$full_number"
+ else
+ short_number="${full_number: -2}"
+ fi
+
+ echo "number=$short_number" >> "$GITHUB_OUTPUT"
+ echo "PR number extracted as $full_number and truncated to $short_number"

This keeps the intended behavior (last two digits, with a safeguard for very small PR numbers) while using valid bash and the standard `$GITHUB_OUTPUT` format.
Also applies to: 27-30, 49-61
🤖 Prompt for AI Agents
template/.github/workflows/get-values.yaml around lines 12-15 (and similarly at
27-30, 49-61): the current bash substring expression for computing pr-short-num
is invalid and will fail at runtime; replace it with a valid bash sequence that
computes the last two digits (pad with a leading zero when the PR number length
is < 2) using standard parameter expansion or printf (e.g., compute full_number,
use ${full_number: -2} with a space before -2 or use printf "%02d" with
arithmetic), and emit the output using the recommended GitHub Actions format
without unnecessary command substitution like: echo "pr-short-num=<value>" >>
$GITHUB_OUTPUT (ensure the value is quoted and no subshell is used).
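The truncation logic from the suggested diff can be sketched as a standalone helper (the function name is hypothetical, and the last-two-characters extraction is written in a POSIX-portable form rather than bash's `${full_number: -2}`):

```shell
# Return the last two digits of a PR number, or the whole number if shorter.
pr_short_num() {
  full_number="$1"
  if [ -z "$full_number" ]; then
    echo "missing PR number" >&2
    return 1
  fi
  if [ "${#full_number}" -lt 2 ]; then
    echo "$full_number"
  else
    # POSIX-portable equivalent of bash's ${full_number: -2}
    echo "$full_number" | cut -c "$((${#full_number} - 1))-"
  fi
}

pr_short_num 81     # -> 81
pr_short_num 12345  # -> 45
```

In the workflow step, the result would then be emitted as `echo "number=$short_number" >> "$GITHUB_OUTPUT"`, per the quoted suggestion.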
Pull in upstream template changes for VCRpy config
Summary by CodeRabbit
New Features
Improvements