RPCs are executed one at a time, blocking on a mutex #174
The underlying Java client uses blocking IO by default. You can switch to NIO with `ConnectionFactory#useNio()`. The Java client still has to use some locking to serialize RPC calls. You're welcome to investigate further with NIO enabled to see where things could be made more optimal. The one-RPC-call-at-a-time limitation will always be there, but there may be ways to mitigate the issue.
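For reference, a minimal sketch of enabling NIO in a reactor-rabbitmq setup, assuming the standard `RabbitFlux`/`SenderOptions` entry points (the exact wiring discussed here is not shown above):

```java
import com.rabbitmq.client.ConnectionFactory;
import reactor.rabbitmq.RabbitFlux;
import reactor.rabbitmq.Sender;
import reactor.rabbitmq.SenderOptions;

public class NioSenderExample {
    public static void main(String[] args) {
        ConnectionFactory connectionFactory = new ConnectionFactory();
        // Switch the underlying Java client from blocking IO to NIO.
        connectionFactory.useNio();

        Sender sender = RabbitFlux.createSender(
                new SenderOptions().connectionFactory(connectionFactory));
        // ... use the sender, then close it.
        sender.close();
    }
}
```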
Fair. Modifying the code example to:

turns into the expected stacktrace:
The documentation does not mention that NIO will prevent some blocking calls, and merely states:
Which, when running a single connection, does not look appealing to me! I suggest updating the documentation to clearly mention the costs of not using NIO with regard to blocking calls...
Yes, above I propose refining the channel pool API. Using another dedicated channel pool for bind/unbind, where the return of channels to the pool is piggybacked on RPC completion, could work too. By the way, at the very least this limitation needs to be documented.
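A rough illustration of the dedicated-channel idea, assuming the existing `ResourceManagementOptions#channelMono` hook (the piggybacked pool-return mechanism itself is only the proposal above, not implemented here; all resource names are illustrative):

```java
import com.rabbitmq.client.Channel;
import com.rabbitmq.client.Connection;
import com.rabbitmq.client.ConnectionFactory;
import reactor.core.publisher.Mono;
import reactor.rabbitmq.*;

public class DedicatedBindChannelExample {
    public static void main(String[] args) {
        ConnectionFactory cf = new ConnectionFactory();
        cf.useNio();

        Mono<Connection> connectionMono =
                Mono.fromCallable(cf::newConnection).cache();

        // A channel reserved for bind/unbind RPCs, so they do not contend
        // on the mutex of the channel used by other RPC calls.
        Mono<Channel> bindChannelMono = connectionMono
                .map(connection -> {
                    try {
                        return connection.createChannel();
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                })
                .cache();

        Sender sender = RabbitFlux.createSender(
                new SenderOptions().connectionMono(connectionMono));

        ResourceManagementOptions options =
                new ResourceManagementOptions().channelMono(bindChannelMono);

        sender.unbind(
                BindingSpecification.binding("exchange", "routingKey", "queue"),
                options)
              .block();
    }
}
```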
As demonstrated in reactor/reactor-rabbitmq#174 (comment), use of NIO can prevent some blocking calls.
Summary
apache/james-project#1003
Calling `Sender::unbind` waits on a mutex release if an RPC is currently being executed on the current channel, and waits for this RPC to complete before submitting a new one. This leads to blocking behaviour upon "massive" unbinds.
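The workaround today is to offload each such call explicitly, which is exactly the pattern the next section argues a reactive driver should not require. A minimal sketch (the `safeUnbind` helper and its parameters are illustrative, not part of any API):

```java
import reactor.core.publisher.Mono;
import reactor.core.scheduler.Schedulers;
import reactor.rabbitmq.BindingSpecification;
import reactor.rabbitmq.Sender;

class UnbindWorkaround {
    // Offload the potentially mutex-blocked unbind RPC to a worker thread
    // so it cannot stall the Netty event loop.
    static Mono<?> safeUnbind(Sender sender, String exchange, String routingKey, String queue) {
        return sender.unbind(BindingSpecification.binding(exchange, routingKey, queue))
                     .subscribeOn(Schedulers.boundedElastic());
    }
}
```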
Expected Behavior
I would expect not to get such blocking behaviours in a reactor-* project, and would expect a reactive driver not to force me to put `subscribeOn(boundedElastic())` everywhere. At the very least we could have a separate method in `ChannelPool` to obtain a channel that is not currently executing RPCs, enabling smart implementations to keep channels always in a state to directly submit RPCs, to open new channels if needed, or to wait without blocking threads where relevant (see the sketch below).
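A rough sketch of what such an extension could look like; `getIdleChannelMono` is a hypothetical method, not part of the current `reactor.rabbitmq.ChannelPool` API:

```java
import com.rabbitmq.client.Channel;
import reactor.core.publisher.Mono;
import reactor.rabbitmq.ChannelPool;

// Hypothetical extension of reactor-rabbitmq's ChannelPool: callers get a
// channel that is guaranteed not to be mid-RPC, so submitting the next RPC
// never parks the subscribing thread on the channel's internal mutex.
interface IdleAwareChannelPool extends ChannelPool {

    // Completes with a channel that has no in-flight RPC. Implementations
    // may hand out an existing idle channel, lazily open a new one, or
    // delay completion (without blocking a thread) until one frees up.
    Mono<? extends Channel> getIdleChannelMono();
}
```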
Actual Behavior
See this flame graph taken on the Netty event loop:
Context: a CTRL+C in a perf test led to 10,000 IMAP connections being closed at the same time, cleaning up their bindings.
Steps to Reproduce
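A minimal sketch of a reproducer along these lines, with illustrative names and counts, and a connection factory left on the default blocking IO:

```java
import com.rabbitmq.client.ConnectionFactory;
import reactor.core.publisher.Flux;
import reactor.rabbitmq.*;

import java.time.Duration;

public class MassUnbindRepro {
    public static void main(String[] args) {
        Sender sender = RabbitFlux.createSender(
                new SenderOptions().connectionFactory(new ConnectionFactory()));

        // Declare one exchange and many bound queues.
        sender.declareExchange(ExchangeSpecification.exchange("ex")).block();
        Flux.range(0, 1000)
            .concatMap(i -> sender.declareQueue(QueueSpecification.queue("q-" + i))
                    .then(sender.bind(BindingSpecification.binding("ex", "rk-" + i, "q-" + i))))
            .blockLast();

        // Fire all unbinds at once: each one queues behind the channel's
        // mutex, so the whole batch is serialized one RPC at a time and
        // blows through the timeout.
        Flux.range(0, 1000)
            .flatMap(i -> sender.unbind(BindingSpecification.binding("ex", "rk-" + i, "q-" + i)))
            .blockLast(Duration.ofSeconds(5));
    }
}
```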
Fails with:
Your Environment