implement server agent draining #795
Conversation
Signed-off-by: Imran Pochi <imranpochi@microsoft.com>
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ipochi

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
I'm happy to see interest in completing this feature! When I was looking into this, here are the questions I ran into:
That potential agent change is probably an improvement but needs careful risk analysis.
Thank you for the feedback @jkh52. Your point on 1 hadn't crossed my mind; it seems like the logical thing to do to stop the agent sync loop if the agent is draining. I'll give more thought to whether the server should do more so that it doesn't get into a conflicted state, or whether the agent should stop the sync loops.

On point 2, I think it's a valid point to continue using the draining agent if there are no non-draining agents; it's akin to falling back on the current behaviour.
This commit adds a fallback for the case where all the agents in the system are draining. Rather than dropping the request with an error, we fall back to the existing behavior, i.e. continue to send the request to the agent even if it is draining.

As for the agent-side issue: if the agent has sent the DRAIN signal to the server, it should ideally stop doing syncOnce with the server, since that misleads the server into thinking the agent is ready again.

Signed-off-by: Imran Pochi <imranpochi@microsoft.com>
Current PR limitations:
Expanding on 1 (agent-side implementation): the simplest option is to stop the agent from doing connectOnce if the agent is draining. I don't want to complicate the logic by making the agents do anything fancy or complex, like sending DIAL_CLS or something to stop accepting new connections, because the agent doesn't have a whole view of the system (maybe it's a one-agent setup, or all agents are draining), in which case server retries combined with the agent doing something fancy could lead to a deadlock (back and forth).

Expanding on the fallback (when all agents, or the only agent, are in a draining state): I think the simplest option in this case would again be to pick the first draining agent found and just send the request to that agent, even if it means the agent is draining.

These limitations are now addressed in the second commit of the PR. Please review and let me know your thoughts.
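For illustration, here is a minimal, self-contained sketch of the selection behaviour described above: prefer a non-draining backend, remember the first draining one seen, and fall back to it only when every backend is draining. `Backend`, `IsDraining`, and `pickBackend` are stand-ins inferred from the diff excerpts below, not the PR's actual code.

```go
package main

import (
	"fmt"
	"math/rand"
)

// Backend is a stand-in for the proxy server's backend type; draining is a
// plain field here only to keep the sketch short.
type Backend struct {
	id       string
	draining bool
}

func (b *Backend) IsDraining() bool { return b.draining }

// pickBackend prefers a non-draining backend, remembering the first draining
// one it sees so it can fall back to it if every backend is draining.
func pickBackend(backends []*Backend) (*Backend, error) {
	if len(backends) == 0 {
		return nil, fmt.Errorf("no backends available")
	}
	var firstDrainingBackend *Backend
	start := rand.Intn(len(backends))
	for i := 0; i < len(backends); i++ {
		b := backends[(start+i)%len(backends)]
		if !b.IsDraining() {
			return b, nil
		}
		if firstDrainingBackend == nil {
			firstDrainingBackend = b
		}
	}
	// All backends are draining: keep the existing behaviour and use one anyway.
	return firstDrainingBackend, nil
}

func main() {
	bes := []*Backend{{id: "a", draining: true}, {id: "b", draining: true}}
	b, _ := pickBackend(bes)
	fmt.Println("selected", b.id) // falls back to a draining backend
}
```

Starting at a random index preserves the existing random spread across healthy agents, while the worst case (all agents draining) degrades to a plain linear scan.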
cheftako left a comment:
The changes look pretty good and follow the design goals, with the fallback mechanism in backend selection serving as a good safeguard. However, the complexity of GetRandomBackend has increased from O(1) to O(N), which is worth noting for performance at scale. Please verify the fallback behavior in DestHostBackendManager to ensure it doesn't create hotspots. Most importantly, please add unit tests for GetRandomBackend (and DestHostBackendManager.Backend) to verify that it prefers non-draining backends and correctly falls back to a draining backend if that is the only option.
var firstDrainingBackend *Backend
// Start at a random agent and check each agent in sequence
The complexity here changes from O(1) to O(N). While likely acceptable given the typical number of agents, please ensure this doesn't introduce latency spikes in large clusters.
}
// Keep track of first draining backend as fallback
if firstDrainingBackend == nil {
The fallback logic is critical here. It would be great to see a unit test in pkg/server/backend_manager_test.go ensuring that:
- Non-draining backends are prioritized.
- Draining backends are returned if no others are available (fallback).
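As an illustration of the requested coverage, the sketch below shows the shape such tests could take. It is self-contained: `fakeBackend` and `fakeStorage` are stand-ins, not the PR's actual types in pkg/server, so the assertions rather than the types are the point.

```go
package server

import (
	"errors"
	"testing"
)

// fakeBackend and fakeStorage are stand-ins so the test shape is runnable on
// its own; real tests would exercise the PR's DefaultBackendStorage instead.
type fakeBackend struct{ draining bool }

func (b *fakeBackend) IsDraining() bool { return b.draining }

type fakeStorage struct{ backends []*fakeBackend }

// GetRandomBackend mirrors the PR's intent: prefer non-draining, else fall back.
func (s *fakeStorage) GetRandomBackend() (*fakeBackend, error) {
	var firstDraining *fakeBackend
	for _, b := range s.backends {
		if !b.IsDraining() {
			return b, nil
		}
		if firstDraining == nil {
			firstDraining = b
		}
	}
	if firstDraining != nil {
		return firstDraining, nil
	}
	return nil, errors.New("no backends available")
}

func TestGetRandomBackendPrefersNonDraining(t *testing.T) {
	s := &fakeStorage{backends: []*fakeBackend{{draining: true}, {draining: false}}}
	b, err := s.GetRandomBackend()
	if err != nil || b.IsDraining() {
		t.Fatalf("expected a non-draining backend, got %+v, err=%v", b, err)
	}
}

func TestGetRandomBackendFallsBackToDraining(t *testing.T) {
	s := &fakeStorage{backends: []*fakeBackend{{draining: true}}}
	b, err := s.GetRandomBackend()
	if err != nil || b == nil {
		t.Fatalf("expected the draining backend as fallback, err=%v", err)
	}
}
```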
// Find a non-draining backend for this destination host
for _, backend := range bes {
if !backend.IsDraining() {
Same as DefaultBackendStorage, adding a test case for this logic in DestHostBackendManager would be beneficial to ensure the preference and fallback behavior work correctly.
// draining indicates if this backend is draining and should not accept new connections
draining bool
// mu protects draining field
Consider using sync/atomic (e.g., atomic.Bool or int32) for the draining field. This would allow IsDraining() to be lock-free, reducing the overhead inside the O(N) loop in GetRandomBackend, which runs under the global s.mu lock.
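A minimal sketch of that atomic alternative, assuming Go 1.19+ for `atomic.Bool` (older toolchains can use an `int32` with `atomic.LoadInt32`/`StoreInt32`); the names mirror the diff but this is not the PR's code:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Backend keeps the draining flag in an atomic.Bool instead of behind a mutex.
type Backend struct {
	id       string
	draining atomic.Bool
}

// SetDraining marks the backend as draining; safe to call concurrently.
func (b *Backend) SetDraining() {
	b.draining.Store(true)
}

// IsDraining reports whether the backend is draining without taking a lock.
func (b *Backend) IsDraining() bool {
	return b.draining.Load()
}

func main() {
	b := &Backend{id: "agent-1"}
	fmt.Println(b.IsDraining()) // false
	b.SetDraining()
	fmt.Println(b.IsDraining()) // true
}
```

With this, IsDraining can be called inside the selection loop without touching any per-backend mutex, so the only lock held is the storage-level s.mu.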
var firstDrainingBackend *Backend
// Start at a random agent and check each agent in sequence
startIdx := s.random.Intn(len(s.agentIDs))
This loop changes the complexity of GetRandomBackend from O(1) to O(N) while holding the DefaultBackendStorage exclusive lock (s.mu). In clusters with thousands of agents, this extends the critical section and could impact throughput. If performance becomes an issue, consider maintaining a separate list/set of non-draining agents.
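If that optimization ever becomes necessary, one way to keep selection O(1) is the swap-remove pattern sketched below. All names (`storage`, `ready`, `markDraining`, `pick`) are illustrative, not the PR's fields.

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

type storage struct {
	mu sync.Mutex
	// ready holds only the IDs of agents that are not draining.
	ready []string
	// index maps an agent ID to its position in ready for O(1) removal.
	index map[string]int
}

func newStorage() *storage {
	return &storage{index: make(map[string]int)}
}

func (s *storage) add(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.index[id] = len(s.ready)
	s.ready = append(s.ready, id)
}

// markDraining removes the agent from the ready list in O(1) by swapping it
// with the last element.
func (s *storage) markDraining(id string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	i, ok := s.index[id]
	if !ok {
		return
	}
	last := len(s.ready) - 1
	s.ready[i] = s.ready[last]
	s.index[s.ready[i]] = i
	s.ready = s.ready[:last]
	delete(s.index, id)
}

// pick returns a random non-draining agent ID in O(1), or "" if none remain.
func (s *storage) pick() string {
	s.mu.Lock()
	defer s.mu.Unlock()
	if len(s.ready) == 0 {
		return ""
	}
	return s.ready[rand.Intn(len(s.ready))]
}

func main() {
	s := newStorage()
	s.add("agent-1")
	s.add("agent-2")
	s.markDraining("agent-1")
	fmt.Println(s.pick()) // always agent-2
}
```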
}
func (cs *ClientSet) connectOnce() error {
// Skip establishing new connections if draining
Ensure that cs.drainCh is properly initialized in NewClientSet (or wherever ClientSet is created). If it is nil, this case will block (if it were a lone case) or be skipped (in a select with default), effectively disabling the check.
I believe it's set up by SetupSignalHandler, but given the distance between them it would be good to be sure.
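The concern is worth checking because a receive on a nil channel never proceeds: inside a select with a default case, the drain branch is silently never taken. A small self-contained sketch (the `clientSet`/`drainCh` names mirror the diff; the rest is illustrative):

```go
package main

import "fmt"

type clientSet struct {
	drainCh chan struct{}
}

func (cs *clientSet) connectOnce() error {
	select {
	case <-cs.drainCh:
		return fmt.Errorf("draining, skipping new connection")
	default:
		// With a nil drainCh this branch is always taken, so the agent would
		// keep dialing even after a drain signal was intended.
		return nil
	}
}

func main() {
	uninitialized := &clientSet{}            // drainCh is nil
	fmt.Println(uninitialized.connectOnce()) // <nil>: the drain check is disabled

	cs := &clientSet{drainCh: make(chan struct{})}
	close(cs.drainCh)            // simulate a drain signal
	fmt.Println(cs.connectOnce()) // draining, skipping new connection
}
```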
// All agents are draining, use one as fallback
if firstDrainingBackend != nil {
agentID := firstDrainingBackend.id
Inconsistency in log levels: GetRandomBackend logs the fallback event at V(2), whereas DestHostBackendManager logs a similar fallback event at V(4).
https://github.com/kubernetes/community/blob/master/contributors/devel/sig-instrumentation/logging.md suggests that V2 is for significant changes in state. If we detected that we had just run out of non-draining backends, I would agree with V2. As it is, I would lean more toward V3.
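For reference, a one-line sketch of the V(3) variant using k8s.io/klog/v2; the message and keys are illustrative, not the PR's exact log line.

```go
package main

import "k8s.io/klog/v2"

func main() {
	klog.InitFlags(nil)
	defer klog.Flush()
	// V(3): extended information about changes, per the logging conventions
	// linked above; both backend managers should use the same level.
	klog.V(3).InfoS("all backends are draining, falling back to a draining backend", "agentID", "agent-1")
}
```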
}
// SetDraining marks the backend as draining
func (b *Backend) SetDraining() {
Verify lock ordering: GetRandomBackend holds s.mu (Lock) and calls IsDraining (which takes b.mu RLock). SetDraining takes b.mu Lock. Ensure no code path attempts to acquire s.mu while holding b.mu to avoid potential deadlocks.
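To make the ordering concern concrete, here is a minimal sketch of the safe one-directional rule: the storage lock (`s.mu`) may be held while taking a backend lock (`b.mu`), but no method should acquire `s.mu` while already holding `b.mu`. Names are illustrative, not the PR's code.

```go
package main

import "sync"

type Backend struct {
	mu       sync.RWMutex
	draining bool
}

func (b *Backend) IsDraining() bool {
	b.mu.RLock()
	defer b.mu.RUnlock()
	return b.draining
}

func (b *Backend) SetDraining() {
	b.mu.Lock()
	defer b.mu.Unlock()
	b.draining = true
}

type storage struct {
	mu       sync.Mutex
	backends []*Backend
}

// selectBackend acquires s.mu and then, per backend, b.mu (via IsDraining).
// This is safe as long as no code path acquires s.mu while holding b.mu.
func (s *storage) selectBackend() *Backend {
	s.mu.Lock()
	defer s.mu.Unlock()
	for _, b := range s.backends {
		if !b.IsDraining() {
			return b
		}
	}
	return nil
}

func main() {
	s := &storage{backends: []*Backend{{}}}
	_ = s.selectBackend()
}
```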
Currently, the server only logs when it receives a drain request from the agent. Ideally, in that scenario, the server should mark the agent as draining and not select it for new requests. This PR implements that.