Hello all 👋,
I've got a Locust setup that runs on ECS (a service for the master host and a service for the workers, each running on its own ECS task). Something I've noticed is that if one of the worker tasks gets killed or replaced (by a deployment, for example), the master service does not see this change. Instead, the worker count is incremented, and the master seems to think the previous workers are still reachable.
I didn't want to post this as a bug, since it could come down to how we have the services configured, but I wanted to see whether other folks have seen this behavior, and whether there's anything I need to do in the workers themselves to ensure they shut down properly (listening for the right signals and reporting a clean exit to the master, for example).
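For reference, here's roughly the kind of thing I've been experimenting with on the worker side. This is a minimal sketch, assuming Locust's `quitting` event hook fires on worker shutdown and that an extra module-level SIGTERM handler is a reasonable safety net when the container entrypoint doesn't forward signals (it may be redundant, or conflict, if Locust already traps SIGTERM itself):

```python
import signal
import sys

from locust import HttpUser, task, events


class WorkerUser(HttpUser):
    @task
    def index(self):
        self.client.get("/")


# Fires when the Locust process begins shutting down; logging here lets
# us confirm in the ECS task logs that the worker exited cleanly rather
# than being SIGKILLed after the stop timeout.
@events.quitting.add_listener
def on_quitting(environment, **kwargs):
    print("Locust worker is quitting")


# Assumed safety net: catch SIGTERM (what ECS sends on task stop)
# ourselves, in case the entrypoint runs locust via a shell and the
# signal never reaches the locust process (i.e. locust isn't PID 1).
def handle_sigterm(signum, frame):
    print(f"Received signal {signum}, exiting")
    sys.exit(0)


signal.signal(signal.SIGTERM, handle_sigterm)
```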
Something else I've noticed while running this configuration is that stopping a test sometimes fails. I haven't quite pinned down when this happens (it seems to happen with more workers and tasks running), but the test never stops and gets stuck in a "Loading" state. I've seen similar issues in the repo that were closed/fixed, but it does still seem to occur with this setup. Curious whether anyone has any insight into why this might be happening at scale.
For some added context, we're using ECS's service discovery for communication between the two services.
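To try to debug the ghost workers, I've also been thinking about periodically logging what the master believes about its workers. A minimal sketch, assuming `environment.runner.clients` on the master maps worker ids to nodes with a `.state` attribute (the `log_workers` name is just for illustration):

```python
import gevent
from locust import events
from locust.runners import MasterRunner


@events.init.add_listener
def on_init(environment, **kwargs):
    # Only run this loop on the master process, not on the workers.
    if isinstance(environment.runner, MasterRunner):

        def log_workers():
            while True:
                # runner.clients maps worker id -> worker node; .state
                # shows whether the master still considers it reachable.
                states = {
                    worker_id: worker.state
                    for worker_id, worker in environment.runner.clients.items()
                }
                print(f"Master sees {len(states)} worker(s): {states}")
                gevent.sleep(30)

        # Spawn as a greenlet so it doesn't block the master's event loop.
        gevent.spawn(log_workers)
```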
Thanks!
Nick.