Conversation

@dulinriley
Contributor

Summary:
Part of: #2027

Before this change, the actor_states_monitor tokio task ran only on PyActorMesh and monitored
only PythonActors. It also forwarded the SupervisionFailureMessage only to other PythonActors,
so Rust-based actors got none of this error propagation.

This change moves the monitor task into the ActorMeshController, which allows every actor,
regardless of type, to monitor the mesh it owns for failures. Each Ref subscribes to messages
from the controller to be alerted to changes in state. This also scales better: instead of each
Mesh and Ref object having its own monitor, there is only one per mesh, so fewer messages
go around. The tokio task became an actor self-message, which gives better tracking of
failures and guarantees, and removes the need for async locks on health state.
This also gives more implementation flexibility, because the details live inside the controller
and users only need the `Subscribe` message.
Subscribers are also guaranteed to get a `None` message, which means that the owner
is still alive and the actors have not changed their state. This can be used to detect cases
where the controller is unreachable.

To do the forwarding, we require the spawning context of ProcMesh::spawn to
have an `impl Handler<SupervisionFailureMessage>`.
This message represents an actor failing on a mesh that your actor (or client) owns.
PythonActor uses this message to call its `__supervise__` callback; Rust actors have
no default implementation. TestRootClient and GlobalRootClient both get implementations
that just panic. In the future we may want something like an `unhandled_fault_hook` for Rust
actors.

The new method is `next_supervision_event`, a future that resolves when an event occurs. It
can be awaited concurrently, via `tokio::select`, with the reply to a casted message to re-create
the Python behavior.

Differential Revision: D87491033

meta-cla bot added the "CLA Signed" label (managed by the Meta Open Source bot) on Dec 16, 2025.

meta-codesync bot commented Dec 16, 2025

@dulinriley has exported this pull request. If you are a Meta employee, you can view the originating Diff in D87491033.

dulinriley added a commit to dulinriley/monarch that referenced this pull request Dec 16, 2025
dulinriley added a commit to dulinriley/monarch that referenced this pull request Dec 16, 2025
@dulinriley dulinriley force-pushed the export-D87491033 branch 2 times, most recently from 0d754d1 to 6056818 Compare December 17, 2025 00:07
dulinriley added a commit to dulinriley/monarch that referenced this pull request Dec 17, 2025

Labels

CLA Signed (managed by the Meta Open Source bot), fb-exported, meta-exported
