
Proposal: Adopt proven anyio/Trio patterns natively into asyncio (multi-release roadmap) #144592

@kovan


Executive Summary

Over the past four years, asyncio has successfully adopted structured concurrency patterns pioneered by Trio and refined by anyio. asyncio.TaskGroup (3.11) and asyncio.timeout() (3.11) are direct descendants of Trio's nurseries and anyio's fail_after(), and they have been widely praised. ExceptionGroup (3.11), eager_task_factory (3.12), and PEP 789 (in progress) continue the trend. This proposal argues that the adoption should continue, because anyio still solves a significant number of pain points that asyncio users hit every day — pain points already documented across dozens of open CPython issues and discuss.python.org threads.

This is not a proposal to add anyio as a dependency, nor to replicate its entire surface area. It is a curated, prioritized list of APIs and patterns where (a) the design has been battle-tested in production by hundreds of thousands of projects, (b) the gap in asyncio causes real bugs or forces users into fragile workarounds, and (c) existing CPython issues already demonstrate community demand. anyio has over 400 million monthly PyPI downloads and is a transitive dependency of FastAPI/Starlette, httpx, the Anthropic SDK, the OpenAI SDK, and Prefect, among many others.

The items below are organized into tiers by impact and design readiness. Tier 1 items have clear semantics, existing CPython issues with strong consensus, and direct precedent in anyio's stable API. Tier 2 and 3 items need more design discussion but represent real gaps. This would be a multi-release effort spanning Python 3.15 through 3.17 or beyond.


Precedent: asyncio Already Adopted from Trio/anyio

This is not unprecedented — it is the continuation of an established pattern:

Python version   Feature added to asyncio                    Inspired by
3.11             asyncio.TaskGroup                           Trio's open_nursery() / anyio's create_task_group()
3.11             asyncio.timeout() / asyncio.timeout_at()    anyio's fail_after() / fail_at()
3.11             BaseExceptionGroup                          Trio's MultiError / anyio's exception handling
3.12             asyncio.eager_task_factory                  Trio's synchronous-until-first-yield semantics
In progress      PEP 789 — Limiting yield in async generators   Trio's cancel scope / generator interaction analysis

Every one of these adoptions improved asyncio. None required adding Trio or anyio as a dependency. The pattern works.


Tier 1: High Impact, Well-Designed, Clear Precedent

These features have stable APIs in anyio, existing CPython issues with community demand, and well-understood semantics. They could land with minimal design debate.


1.1 CancelScope — General-Purpose Cancellation Scope

The pain point. asyncio's cancellation model has no general-purpose scope object. Task.cancel() injects a single CancelledError from the outside, which can be accidentally swallowed — or worse, misattributed when nested operations both use cancellation. There is no way to cancel "just this block of code" without creating a separate task. There is no way to inspect after the fact whether cancellation actually occurred. Users routinely hit these issues (see linked issues below), and the workarounds are error-prone.

Current workaround (fragile):

# Want to cancel a block of work, but asyncio has no scope for this.
# Must create a wrapper task just to get a cancel handle:
async def do_work():
    await step_one()
    await step_two()  # If step_one is cancelled, step_two never runs — but
                       # if step_one catches CancelledError internally, it might.

task = asyncio.create_task(do_work())
# Later:
task.cancel()  # Injects CancelledError, but the task can swallow it.
# No way to know if cancellation actually took effect.

Proposed API:

async with asyncio.CancelScope() as scope:
    await step_one()
    await step_two()
    # If scope.cancel() is called (from another task, a callback, or a deadline),
    # execution exits the block cleanly.

# After the block:
if scope.cancelled_caught:
    print("Work was cancelled")

# Deadline support (subsumes timeout):
async with asyncio.CancelScope(deadline=loop.time() + 5.0) as scope:
    await long_operation()

# Shielding:
async with asyncio.CancelScope(shield=True):
    await must_not_be_cancelled()

Key properties and methods:

  • deadline — absolute time after which the scope is cancelled (settable)
  • shield — if True, protects the enclosed code from external cancellation
  • cancel() — cancel the scope
  • cancel_called — True if cancel() was called or the deadline expired
  • cancelled_caught — True if the scope actually caught a cancellation

Design note: CancelScope can work with asyncio's existing edge-triggered cancellation model — the quattro library already demonstrates this. This does not require switching asyncio to level-triggered cancellation.

Why this matters: CancelScope is the foundational primitive. asyncio.timeout() and TaskGroup could eventually be reimplemented on top of it, simplifying asyncio's internals and unifying the cancellation model. Even without that refactoring, CancelScope as a standalone API solves real problems today.

Existing CPython issues this would address:

  • #108951 — Add a way to cancel/stop a TaskGroup (69+ comments)
  • #103486 — Safe synchronous cancellation in asyncio (19 comments)
  • #99714 — asyncio: add shield scope context manager (9 comments)
  • #101581 — Optionally prevent child tasks from being cancelled in TaskGroup (11 comments)

1.2 move_on_after() / move_on_at() — Timeout Without Exception

The pain point. asyncio.timeout() raises TimeoutError on expiry. This is correct for "hard" timeouts where you want the error to propagate. But many real-world patterns just want "try for N seconds, then continue with a fallback" — and wrapping every timeout in try/except TimeoutError is verbose, error-prone (what if the inner coroutine raises its own TimeoutError?), and obscures intent.

Current workaround:

try:
    async with asyncio.timeout(5):
        result = await fetch_from_cache()
except TimeoutError:
    # Was this OUR timeout or one from inside fetch_from_cache()?
    result = await fetch_from_database()

Proposed API:

async with asyncio.move_on_after(5) as scope:
    result = await fetch_from_cache()

if scope.cancelled_caught:
    # Unambiguous: OUR deadline expired, not an inner timeout
    result = await fetch_from_database()

This is asyncio.timeout() but without the exception — you check scope.cancelled_caught instead. It is the most commonly requested "missing" timeout API, as discussed on discuss.python.org. It is trivially implemented once CancelScope exists. move_on_at() is the absolute-time variant, paralleling timeout_at().


1.3 TaskGroup.start() with TaskStatus — Task Readiness Signaling

The pain point. When spawning a long-running service task (e.g., a server, a consumer, a background processor), callers often need to wait until the task is ready before proceeding. TaskGroup.create_task() returns immediately. Users resort to manual Event synchronization, which is error-prone and clutters the code.

Current workaround (manual Event):

async def run_server(started: asyncio.Event):
    server = await setup_listener()
    started.set()  # Manual signal
    await server.serve_forever()

async def main():
    started = asyncio.Event()
    async with asyncio.TaskGroup() as tg:
        tg.create_task(run_server(started))
        await started.wait()  # Manual wait
        # Now safe to connect
        await connect_to_server()

Proposed API:

from asyncio import TaskGroup, TaskStatus, TASK_STATUS_IGNORED

async def run_server(
    port: int, *, task_status: TaskStatus[int] = TASK_STATUS_IGNORED
):
    server = await setup_listener(port)
    task_status.started(server.port)  # Signal ready, pass back actual port
    await server.serve_forever()

async def main():
    async with TaskGroup() as tg:
        actual_port = await tg.start(run_server, 0)
        # Guaranteed: server is listening before this line runs
        await connect_to_server(actual_port)

TaskStatus is a simple protocol with one method, started(value=None). TaskGroup.start() is an awaitable that blocks until the spawned task calls task_status.started(). If the task exits without calling started(), a RuntimeError is raised. The TASK_STATUS_IGNORED sentinel allows the same function to work both with tg.start() and tg.create_task(). This pattern has been stable in anyio for 7+ years and is used extensively in production.


Tier 2: High Impact, Needs Design Work

These features address real gaps but require more discussion about the right API shape for asyncio.


2.1 CapacityLimiter — Smarter Concurrency Limiting

The pain point. asyncio.Semaphore works but has a sharp edge: the same task can acquire() twice, deadlocking itself. There is no built-in way to dynamically adjust the limit, and no tracking of which tasks hold the semaphore.

Proposed API sketch:

limiter = asyncio.CapacityLimiter(total_tokens=10)

async with limiter:
    await do_limited_work()

# Dynamic adjustment
limiter.total_tokens = 20

# Introspection
print(limiter.available_tokens, limiter.borrowed_tokens)

Key difference from Semaphore: a CapacityLimiter tracks borrowers and prevents the same task (or on-behalf-of object) from acquiring twice. anyio uses this as the basis for thread pool limiting (current_default_thread_limiter() returns a CapacityLimiter), which provides a clean answer to #136084 (making asyncio.to_thread more efficient and configurable).
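
A minimal sketch of the borrower-tracking behavior, layered on asyncio.Semaphore (dynamic total_tokens adjustment is omitted; the class name is illustrative):

```python
import asyncio


class CapacityLimiterSketch:
    """Illustrative: a Semaphore that tracks borrowers and rejects re-entry."""

    def __init__(self, total_tokens):
        self.total_tokens = total_tokens
        self._semaphore = asyncio.Semaphore(total_tokens)
        self._borrowers = set()

    @property
    def borrowed_tokens(self):
        return len(self._borrowers)

    @property
    def available_tokens(self):
        return self.total_tokens - self.borrowed_tokens

    async def __aenter__(self):
        task = asyncio.current_task()
        if task in self._borrowers:
            # The sharp edge Semaphore lacks: refuse a double acquire
            # by the same task instead of deadlocking.
            raise RuntimeError("this task already holds a token")
        await self._semaphore.acquire()
        self._borrowers.add(task)

    async def __aexit__(self, *exc):
        self._borrowers.discard(asyncio.current_task())
        self._semaphore.release()
```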


2.2 Memory Object Streams — Better Queue

The pain point. asyncio.Queue is unbounded by default (risking memory exhaustion), lacks async iteration, has no concept of "closing" to signal completion (until shutdown() in 3.13, which still lacks iteration support), and provides no way for multiple consumers to independently track when they are done.

Proposed API sketch:

send_stream, receive_stream = asyncio.create_memory_object_stream[str](max_buffer_size=100)

# Producer
async with send_stream:
    for item in items:
        await send_stream.send(item)
# Stream auto-closes on exit, signaling end to consumers

# Consumer
async with receive_stream:
    async for item in receive_stream:
        await process(item)
# Iteration ends cleanly when all senders close

# Multiple consumers: clone the receive end
receive_clone = receive_stream.clone()

The split send/receive design enforces correct ownership, the clone semantics enable fan-out patterns, and bounded-by-default prevents silent memory leaks. This is a superset of what asyncio.Queue can do while being harder to misuse.


2.3 Async File I/O

The pain point. asyncio provides zero support for file I/O. Every asyncio application that reads or writes files must either (a) use asyncio.to_thread() manually for every call, (b) depend on aiofiles, or (c) block the event loop. This is a glaring gap for a standard library async framework. The topic has been discussed on discuss.python.org.

Proposed API sketch:

# Async open
async with asyncio.open_file("data.json", "r") as f:
    contents = await f.read()

# Async Path (mirrors pathlib.Path)
path = asyncio.AsyncPath("/tmp/output.txt")
await path.write_text("hello")
exists = await path.exists()

async for entry in path.parent.iterdir():
    print(entry.name)

Under the hood, this would delegate to to_thread.run_sync(), exactly as anyio and aiofiles do. The value is in providing a standard, batteries-included API so that every project does not have to independently solve this or add a dependency.
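
A sketch of that delegation approach, wrapping open() and the file methods in asyncio.to_thread(); the helper and class names are hypothetical:

```python
import asyncio
from contextlib import asynccontextmanager
from functools import partial


class _AsyncFile:
    """Minimal async wrapper delegating blocking calls to the thread pool."""

    def __init__(self, f):
        self._f = f

    async def read(self, size=-1):
        return await asyncio.to_thread(self._f.read, size)

    async def write(self, data):
        return await asyncio.to_thread(self._f.write, data)


@asynccontextmanager
async def open_file_sketch(path, mode="r", **kwargs):
    # open() itself can block (network filesystems, etc.), so it also
    # runs in a worker thread.
    f = await asyncio.to_thread(partial(open, path, mode, **kwargs))
    try:
        yield _AsyncFile(f)
    finally:
        await asyncio.to_thread(f.close)
```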


Tier 3: Valuable Additions

These would improve asyncio but are lower priority or have smaller user-facing impact.


3.1 Signal Handling — open_signal_receiver()

The pain point. loop.add_signal_handler() replaces any existing handler, provides no way to retrieve or restore the previous handler, and does not compose well. #88378 documents how asyncio overrides signal handlers set by application code.

Proposed API sketch:

async with asyncio.open_signal_receiver(signal.SIGTERM, signal.SIGINT) as signals:
    async for signum in signals:
        if signum == signal.SIGTERM:
            await graceful_shutdown()
            break

This context manager approach composes cleanly, restores previous handlers on exit, and provides an async-iterable interface.
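
A POSIX-only approximation is possible today with loop.add_signal_handler() feeding an asyncio.Queue; the restore-on-exit step is exactly the part asyncio currently leaves to the user (the helper name is hypothetical):

```python
import asyncio
import signal
from contextlib import asynccontextmanager


@asynccontextmanager
async def open_signal_receiver_sketch(*signums):
    """POSIX-only sketch: queue delivered signals, restore handlers on exit."""
    loop = asyncio.get_running_loop()
    queue = asyncio.Queue()
    previous = {}
    for signum in signums:
        previous[signum] = signal.getsignal(signum)  # remember prior handler
        loop.add_signal_handler(signum, queue.put_nowait, signum)

    async def receiver():
        while True:
            yield await queue.get()

    try:
        yield receiver()
    finally:
        for signum in signums:
            loop.remove_signal_handler(signum)
            signal.signal(signum, previous[signum])  # restore what we replaced
```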


3.2 Thread Integration Improvements

The pain point. asyncio.to_thread() lacks a cancellation flag (the caller cannot indicate that the result is no longer needed), provides no way to limit thread concurrency, and run_coroutine_threadsafe() is awkward for calling async code from sync threads.

Proposed improvements:

# Cancellation awareness
await asyncio.to_thread(blocking_fn, abandon_on_cancel=True)

# Thread capacity limiting (builds on CapacityLimiter)
await asyncio.to_thread(blocking_fn, limiter=my_limiter)

# Calling async from sync thread (cleaner than run_coroutine_threadsafe)
from asyncio import from_thread
result = from_thread.run(some_coroutine, arg1, arg2)

anyio's from_thread.check_cancelled() is also valuable: it lets a worker thread cooperatively check whether its parent async task was cancelled.

Existing CPython issue: #136084 — asyncio.to_thread could be a lot more efficient
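
The limiter-aware variant can be prototyped today by wrapping asyncio.to_thread() in any async context manager with semaphore semantics; the helper below is hypothetical:

```python
import asyncio


async def to_thread_limited(fn, *args, limiter):
    """Hypothetical: bound to_thread() concurrency with a shared limiter.

    `limiter` is any async context manager with semaphore semantics
    (asyncio.Semaphore works; the proposed CapacityLimiter would too).
    """
    async with limiter:
        return await asyncio.to_thread(fn, *args)
```

This keeps at most `limiter`'s capacity of blocking calls in flight at once, regardless of how many callers are queued.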


3.3 BlockingPortal — Bridge Sync and Async Worlds

The pain point. Running async code from a synchronous context (e.g., a sync test, a Django view, a CLI tool) requires manually managing an event loop thread. asyncio.run() blocks, and run_coroutine_threadsafe() requires a running loop reference.

Proposed API sketch:

from asyncio import start_blocking_portal

# Spins up an event loop in a background thread
with start_blocking_portal() as portal:
    result = portal.call(some_async_function, arg1)
    future = portal.start_task_soon(long_running_task)

This is less critical given that asyncio.run() exists and asyncio.Runner (3.11) improved the situation, but the portal pattern remains useful for mixed sync/async codebases.
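
The portal pattern itself is straightforward to sketch with current asyncio primitives, which supports the argument that a standard version would be cheap to provide (all names below are hypothetical):

```python
import asyncio
import threading
from contextlib import contextmanager


@contextmanager
def start_blocking_portal_sketch():
    """Sketch: event loop in a daemon thread, driven via run_coroutine_threadsafe."""
    loop = asyncio.new_event_loop()
    thread = threading.Thread(target=loop.run_forever, daemon=True)
    thread.start()

    class _Portal:
        def call(self, fn, *args):
            # Submit a coroutine to the background loop; block for its result.
            return asyncio.run_coroutine_threadsafe(fn(*args), loop).result()

    try:
        yield _Portal()
    finally:
        loop.call_soon_threadsafe(loop.stop)
        thread.join()
        loop.close()
```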


What This Is NOT

To be explicit about scope:

  • NOT a proposal to add anyio as a dependency. Every proposed API would be implemented natively in asyncio, designed for asyncio's architecture. anyio is the inspiration, not the implementation.
  • NOT a proposal to deprecate anything. asyncio.Semaphore, asyncio.Queue, asyncio.timeout(), and all current APIs would remain. New APIs would coexist.
  • NOT a proposal to switch to level-triggered cancellation. That would be a massive backwards-incompatible change. CancelScope can be introduced with edge-triggered semantics and a stateful flag, preserving compatibility.
  • NOT a proposal to replicate anyio's backend abstraction. anyio's multi-backend (asyncio + Trio) design is out of scope. Only the user-facing API patterns matter here.
  • NOT all-or-nothing. Each item is independent. Adopting move_on_after() alone would be a win. Adopting TaskGroup.start() alone would be a win. This is a menu, not a mandate.

Backwards Compatibility

All proposed APIs are additive. No existing APIs are modified or deprecated.

  • CancelScope and move_on_after() introduce new names into the asyncio namespace.
  • TaskGroup.start() adds a new method to an existing class, with no impact on create_task().
  • TaskStatus adds a new protocol to the namespace.
  • Memory object streams add new classes; asyncio.Queue is untouched.
  • Thread improvements add new keyword arguments to to_thread() (with backwards-compatible defaults) and new from_thread utilities.

The only area requiring care is CancelScope integration with TaskGroup and timeout(). If TaskGroup is eventually reimplemented on top of CancelScope, the internal change must preserve the existing external behavior exactly.


Suggested Roadmap

Release   Candidates                                                                Notes
3.15      move_on_after(), TaskGroup.start() + TaskStatus                           Smallest scope, highest value, lowest risk
3.16      CancelScope (standalone), CapacityLimiter, thread improvements            Foundation for future work; CancelScope informs future TaskGroup/timeout() refactoring
3.17+     Memory object streams, async file I/O, signal handling, BlockingPortal    Larger surface area, needs more design iteration


Acknowledgments. This proposal owes a debt to Alex Grönholm (anyio), Nathaniel J. Smith (Trio), and the many contributors to the CPython issues linked above. Their work over the past 7+ years has provided the design evidence that makes these proposals possible.
