From 97adfa6ba024fc6d484d1d430b71b3268188c50b Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Jan=20Pokorn=C3=BD?= <jpokorny@redhat.com>
Date: Fri, 24 May 2019 22:18:55 +0200
Subject: [PATCH] IPC: server: avoid temporary channel priority loss, up to
 deadlock-worth
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

It turns out that while 7f56f58 allowed for less blocking (thus
throughput-increasing) initial handling of connections from clients
within the abstract (out-of-libqb managed) event loop, it unfortunately
subscribes itself back to that polling mechanism for the
UNIX-socket-check with a default priority, which can be lower than
desired (as configured with an explicit qb_ipcs_request_rate_limit()
call) for the particular channel (amongst attention-competing siblings
in the pool; the term "channel" here refers to the associated
communication, that is, both the server and the on-server abstraction
for particular clients).  And priority-based discrepancies are not
forgiven in true priority-abiding systems (that is, unlike with libqb's
native event loop harness as detailed in the previous commit, for which
this would be soft-tolerated and hence the problem would not be spotted
in the first place -- but that is explicitly excluded from further
discussion).
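
For illustration, consider the following minimal server sketch (the
service name, the trivial handlers and the choice of libqb's native
loop are made up for the example; only the qb_ipcs_*/qb_loop_* calls
are the actual API).  The user asks for the highest channel priority,
yet prior to this change the authentication stage of every freshly
accepted connection was still registered at the hard-coded QB_LOOP_MED:

    #include <sys/types.h>
    #include <qb/qbloop.h>
    #include <qb/qbipcs.h>

    static qb_loop_t *loop;

    /* Accept every connection; a real server would check uid/gid here. */
    static int32_t demo_accept(qb_ipcs_connection_t *c, uid_t uid, gid_t gid)
    {
        return 0;
    }

    /* Plainly echo each request back as the response. */
    static int32_t demo_msg(qb_ipcs_connection_t *c, void *data, size_t size)
    {
        return (int32_t)qb_ipcs_response_send(c, data, size);
    }

    /* Glue the service to the event loop; an application driving GLib or
     * any other abstract loop provides the equivalent of these. */
    static int32_t demo_job_add(enum qb_loop_priority p, void *data,
                                qb_loop_job_dispatch_fn fn)
    {
        return qb_loop_job_add(loop, p, data, fn);
    }

    static int32_t demo_dispatch_add(enum qb_loop_priority p, int32_t fd,
                                     int32_t evts, void *data,
                                     qb_ipcs_dispatch_fn_t fn)
    {
        return qb_loop_poll_add(loop, p, fd, evts, data, fn);
    }

    static int32_t demo_dispatch_mod(enum qb_loop_priority p, int32_t fd,
                                     int32_t evts, void *data,
                                     qb_ipcs_dispatch_fn_t fn)
    {
        return qb_loop_poll_mod(loop, p, fd, evts, data, fn);
    }

    static int32_t demo_dispatch_del(int32_t fd)
    {
        return qb_loop_poll_del(loop, fd);
    }

    int main(void)
    {
        struct qb_ipcs_service_handlers sh = {
            .connection_accept = demo_accept,
            .msg_process = demo_msg,
        };
        struct qb_ipcs_poll_handlers ph = {
            .job_add = demo_job_add,
            .dispatch_add = demo_dispatch_add,
            .dispatch_mod = demo_dispatch_mod,
            .dispatch_del = demo_dispatch_del,
        };

        loop = qb_loop_create();
        qb_ipcs_service_t *srv = qb_ipcs_create("prio_demo", 0, QB_IPC_SHM, &sh);
        qb_ipcs_poll_handlers_set(srv, &ph);

        /* Channel priority requested as high as it gets
         * (QB_IPCS_RATE_FAST maps to QB_LOOP_HIGH)... */
        qb_ipcs_request_rate_limit(srv, QB_IPCS_RATE_FAST);

        /* ...but before this patch, qb_ipcs_uc_recv_and_auth() would still
         * dispatch_add() the per-connection authentication fd with
         * QB_LOOP_MED, so constantly busy QB_LOOP_HIGH siblings could
         * starve that client's authentication indefinitely. */
        qb_ipcs_run(srv);
        qb_loop_run(loop);
        return 0;
    }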

On top of that, it violates the natural assumption that once the
(single-threaded, which is imposed by libqb, at least between the
initial accept() and the aforesaid UNIX-socket-check) server accepts
a connection, it shall take care of serving it (at least within the
stated initial scope of the client connection life cycle) rather than
rush to accept new ones -- which is exactly what used to happen
previously once the library user set the effective priority in the
abstract poll above the default one.

It is conceivable that, just as in the former case of
attention-competing siblings with higher priority, whereby they could
_infinitely_ live on at the expense of starving the client in the
initial handling (authentication) phase, despite the library user's
as-high-as-siblings intention (defeated by unconditionally using the
default priority for that step instead, which is what we address here),
a deadlock is imminent also in this latter
accept-to-client-authentication-handling case, should there be an
_unlimited_, fast-paced arrival queue (well, limited by the number of
allowable open descriptors within the system, but with the Linux
built-in maximum of 1M there may be no practical difference, at least
for time-sensitive applications).
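
To see why a strictly priority-abiding loop never forgives such a
demotion, here is a self-contained GLib-only sketch (not libqb code and
not the actual test implementation, merely the principle behind the
test_ipc_dispatch_*_glib_prio_deadlock_provoke tests): a constantly
ready higher-priority source monopolizes the loop, and the
default-priority source below it is never dispatched:

    /* build: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
    #include <glib.h>

    /* Always ready again: models sibling channels with constant traffic. */
    static gboolean greedy_high(gpointer data)
    {
        return G_SOURCE_CONTINUE;
    }

    /* Never reached while greedy_high() keeps being dispatched. */
    static gboolean starved_default(gpointer data)
    {
        g_print("default-priority source finally ran\n");
        return G_SOURCE_REMOVE;
    }

    int main(void)
    {
        GMainLoop *ml = g_main_loop_new(NULL, FALSE);

        g_idle_add_full(G_PRIORITY_HIGH, greedy_high, NULL, NULL);
        g_idle_add_full(G_PRIORITY_DEFAULT, starved_default, NULL, NULL);

        g_main_loop_run(ml);    /* starved_default() never gets a turn */
        return 0;
    }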

The only hope, then, is that such deadlocks are rather theoretical,
since a "spontaneous" constant stream of either communication on
unrelated, higher-priority sibling channels, or of new connection
arrivals, may just as well testify to a poor design of the libqb IPC
application.  That being said, the unconditional default priority in
the isolated context of initial server-side client authentication is
clearly a bug, but such an application shall apply appropriate
rate-limiting measures (exactly on a priority basis) to handle
unexpected flux nonetheless.

The fix makes the test_ipc_dispatch_*_glib_prio_deadlock_provoke tests
pass.

Signed-off-by: Jan Pokorný <jpokorny@redhat.com>
---
lib/ipc_setup.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/ipc_setup.c b/lib/ipc_setup.c
index 43dc3e7..f4e9bbf 100644
--- a/lib/ipc_setup.c
+++ b/lib/ipc_setup.c
@@ -825,10 +825,10 @@ qb_ipcs_uc_recv_and_auth(int32_t sock, struct qb_ipcs_service *s)
setsockopt(sock, SOL_SOCKET, SO_PASSCRED, &on, sizeof(on));
#endif
- res = s->poll_fns.dispatch_add(QB_LOOP_MED,
- data->sock,
- POLLIN | POLLPRI | POLLNVAL,
- data, process_auth);
+ res = s->poll_fns.dispatch_add(s->poll_priority,
+ data->sock,
+ POLLIN | POLLPRI | POLLNVAL,
+ data, process_auth);
if (res < 0) {
qb_util_log(LOG_DEBUG, "Failed to process AUTH for fd (%d)", data->sock);
close(sock);
--
2.31.1