Asynchronous shutdown of workers #172
Well… the idea for these pools of processes is to have a static number of workers. Can you clarify why…
Any strategy except `simple_one_for_one` shuts down children synchronously: the supervisor waits for the `DOWN` message from each child before shutting down the next (in reverse start order). For `simple_one_for_one`, all children are shut down "at the same time", and then the supervisor waits for all the `DOWN`s. From the supervisor documentation:

> As a simple_one_for_one supervisor can have many children, it shuts them all down asynchronously. This means that the children do their cleanup in parallel, and therefore the order in which they are stopped is not defined.

https://erlang.org/doc/man/supervisor.html
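For reference, a minimal sketch of what a `simple_one_for_one` supervisor looks like; the module and worker names here are hypothetical, not part of wpool:

```erlang
-module(async_shutdown_sup).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

%% With simple_one_for_one, terminating this supervisor signals all
%% dynamically started children at once and then waits for all the
%% DOWN messages, so slow terminate/2 callbacks overlap instead of
%% adding up one child at a time.
init([]) ->
    SupFlags = #{strategy => simple_one_for_one,
                 intensity => 5,
                 period => 60},
    ChildSpec = #{id => worker,
                  start => {my_worker, start_link, []},
                  shutdown => 5000},
    {ok, {SupFlags, [ChildSpec]}}.
```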
WOW! I didn't know that… Two seconds of thinking led to this terrible, terrible hack…

```erlang
rpc:pmap(
    {gen_server, stop},
    [],
    [P || {_, P, _, _} <- supervisor:which_children(YourWpoolSup)]
).
```

I'm pretty sure that this is not a good idea, either because of the restarts (workers are probably not… I'll keep thinking about this…
Yeah, given that the workers are…
This is slightly related to PR #171. I wonder if you have considered making it possible to set the supervisor strategy for `wpool_process_sup` to `simple_one_for_one`. I mean, it is possible to set this value with the `strategy` option, but as far as I can understand it will only work for strategies other than `simple_one_for_one`.

The reason I would like this is mainly to obtain asynchronous shutdown of the workers in order to reduce shutdown time, so other ways of achieving this would also be interesting.

Any thoughts?
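Assuming the `strategy` option takes a standard supervisor strategy tuple (as its current documented values suggest), the request is roughly that the following call be accepted; the pool name and worker count are illustrative only:

```erlang
%% Sketch of the feature request from the caller's side. The
%% simple_one_for_one value is the part that, per this issue, wpool
%% does not currently support for wpool_process_sup.
{ok, _Pid} = wpool:start_pool(my_pool,
                              [{workers, 100},
                               %% accepted today: e.g. {one_for_one, 5, 60}
                               {strategy, {simple_one_for_one, 5, 60}}]).
```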