Hi, @ruslandoga!
Thanks a lot for the tip! At first glance, using tags should let us avoid storing monitoring references for the "type of process" we monitor (worker or waiting caller). I'll have to check it out.
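For context, here is a minimal sketch of the tip being discussed, not Poolex's actual code: since OTP 24 a monitor can carry a tag, so the :DOWN message itself says what kind of process went down and no separate "reference to process type" mapping is needed. The module name and tag values below are made up for illustration.

```elixir
defmodule TaggedMonitorSketch do
  # Attach a tag to the monitor; the eventual :DOWN message is prefixed with it.
  def monitor_worker(worker_pid) do
    :erlang.monitor(:process, worker_pid, tag: {:DOWN, :worker})
  end

  def monitor_caller(caller_pid) do
    :erlang.monitor(:process, caller_pid, tag: {:DOWN, :waiting_caller})
  end

  # The tag replaces a lookup in state: pattern match on it directly.
  def classify_down_message(message) do
    case message do
      {{:DOWN, :worker}, _ref, :process, pid, _reason} -> {:worker_died, pid}
      {{:DOWN, :waiting_caller}, _ref, :process, pid, _reason} -> {:caller_died, pid}
    end
  end
end
```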
Oh, and I also found some more tiny bugs.
Consider this situation: we have a pool, all workers are busy, and there is a queue of waiting callers. Then the pool receives a call to create more idle workers, but these new workers are not handed to the waiting callers.
And there's also a nasty bug which is very rare under normal conditions but quite possible under extreme load. A caller sits in get_idle_worker, the timeout expires, and the caller sends cancel_waiting. Before Poolex receives the cancel message, some worker becomes available and is sent to this waiting caller. Poolex then handles the cancel_waiting message, and we end up with a worker that is considered busy even though the caller may never use it; that worker stays busy until the caller dies.
Several of the small improvements mentioned above have now been made. Thank you very much for your comments!
Poolex.get_state/1 is deprecated in favor of :sys.get_state/1. Also added adobe/elixir-styler to CI and made a few minor refactoring changes.
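For reference, a tiny hedged example of the replacement call; the pool name :my_pool is made up:

```elixir
# Deprecated: Poolex.get_state(:my_pool)
# Preferred: use the standard OTP helper on the registered pool process.
:sys.get_state(:my_pool)
```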
And a few more improvements!
These are mainly library maintenance improvements:
However, there is also a small usability improvement. During initialization, there is no longer a need to explicitly specify pool_id since, judging by user feedback, it often coincides with worker_module. So you can omit pool_id; by default, it takes the value of worker_module.
Example:
```elixir
Poolex.start_link(worker_module: MyLovelyWorker, workers_count: 10)

Poolex.run(MyLovelyWorker, fn pid -> do_something(pid) end)
```
Fixed some small bugs:
- busy_worker will be restarted after the caller's death. Read more in the issue.
- add_idle_workers!/2 now provides new workers to waiting callers if they exist (see the sketch below). Read more in the issue.
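A minimal sketch of the second fix in use, assuming add_idle_workers!/2 takes the pool id and the number of workers to add; the worker module name is made up:

```elixir
# Start a small pool; pool_id defaults to the worker module.
{:ok, _pool} = Poolex.start_link(worker_module: MyLovelyWorker, workers_count: 2)

# Grow the pool at runtime. With the fix, if callers are already queued
# waiting for a worker, these new workers are handed to them right away.
Poolex.add_idle_workers!(MyLovelyWorker, 3)
```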
Attention

I pulled these bug fixes forward to stabilize the library. However, in the next release, I plan several breaking changes:
- get_debug_info/1 (issue).

If you need to get some updates in before the minimum Elixir version is bumped, write to me and we will discuss it.
I decided to solve this issue before making any breaking changes.
So I've published a new version of Poolex with fixes for the problem mentioned above.
Now the pool will not crash when some workers fail to start for any reason. Poolex will start with the healthy workers and keep trying to restart the failed ones.
You can use the new failed_workers_retry_interval option on pool initialization to configure the interval between start retry attempts.
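A hedged configuration sketch; the worker module name is made up, and I'm assuming the interval is given in milliseconds:

```elixir
# The pool boots with whichever workers started successfully and retries
# the failed ones on the configured interval.
Poolex.start_link(
  worker_module: MyLovelyWorker,
  workers_count: 10,
  # Assumed to be in milliseconds; :timer.seconds/1 returns 5_000.
  failed_workers_retry_interval: :timer.seconds(5)
)
```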
Some other little improvements were also made; see the changelog for more info.
Some of you have been asking for this for a long time…
Finally, I've added the worker_shutdown_delay option to pools!
Now you can specify how long the overflowed worker will wait for other callers before it shuts down.
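A hedged sketch of how this could be configured, assuming the pool allows temporary overflow workers via the max_overflow option, the delay is in milliseconds, and the worker module name is made up:

```elixir
Poolex.start_link(
  worker_module: MyLovelyWorker,
  workers_count: 5,
  # Up to 5 extra workers may be started under load (assumed option).
  max_overflow: 5,
  # An overflow worker that becomes idle waits 10 seconds for another
  # caller before shutting down.
  worker_shutdown_delay: :timer.seconds(10)
)
```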
For more information about this feature, see here.
Additionally, I’ve added validation to pool parameters and performed some code refactoring to improve clarity.
You can check all the changes in the release notes.