tcp: refine rule to allow EPOLLOUT generation under mem pressure
author     Eric Dumazet <edumazet@google.com>
           Thu, 12 Dec 2019 20:55:31 +0000 (12:55 -0800)
committer  Jakub Kicinski <jakub.kicinski@netronome.com>
           Sat, 14 Dec 2019 05:58:40 +0000 (21:58 -0800)
At the time commit ce5ec440994b ("tcp: ensure epoll edge trigger
wakeup when write queue is empty") was added to the kernel,
we still had a single write queue, combining rtx and write queues.

Once we moved the rtx queue into a separate rb-tree, testing
whether sk_write_queue is empty became suboptimal.
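
For context, since that split the two queues live in separate
struct sock members, so an empty sk_write_queue says nothing about
data still sitting in the retransmit rb-tree. Roughly (a simplified,
abridged excerpt; the exact layout in include/net/sock.h may differ
between kernel versions):

    struct sock {
            ...
            union {
                    struct sk_buff  *sk_send_head;
                    struct rb_root  tcp_rtx_queue;  /* sent, not yet acked */
            };
            struct sk_buff_head     sk_write_queue; /* queued, not yet sent */
            ...
    };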

Indeed, if we have packets in the rtx queue, we probably want
to delay EPOLLOUT generation until incoming packets free them
and make room, but more importantly to avoid flooding the
application with EPOLLOUT events.
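
To illustrate the waiter in question: an edge-triggered writer
typically calls write() until it hits EAGAIN and then parks in
epoll_wait() for the next EPOLLOUT edge, so a missed wakeup can
stall it indefinitely. A minimal userspace sketch (illustrative
only; it assumes sock was already registered on epfd with
EPOLLOUT | EPOLLET):

    #include <errno.h>
    #include <sys/epoll.h>
    #include <unistd.h>

    /* Illustrative edge-triggered writer: it relies on the kernel
     * regenerating EPOLLOUT once the socket can accept data again.
     */
    static int write_all(int epfd, int sock, const char *buf, size_t len)
    {
            size_t off = 0;

            while (off < len) {
                    ssize_t n = write(sock, buf + off, len - off);

                    if (n > 0) {
                            off += n;
                    } else if (n < 0 && errno == EAGAIN) {
                            struct epoll_event ev;

                            /* Buffer full: wait for the next EPOLLOUT edge. */
                            if (epoll_wait(epfd, &ev, 1, -1) < 0)
                                    return -1;
                    } else {
                            return -1;      /* real error */
                    }
            }
            return 0;
    }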

The solution is to use the tcp_rtx_and_write_queues_empty() helper.
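
For reference, the helper simply combines the emptiness tests of
both queues; approximately as defined in include/net/tcp.h (shown
here for convenience, lightly paraphrased, so consult the tree for
the authoritative version):

    static inline bool tcp_rtx_queue_empty(const struct sock *sk)
    {
            return RB_EMPTY_ROOT(&sk->tcp_rtx_queue);
    }

    static inline bool tcp_write_queue_empty(const struct sock *sk)
    {
            return skb_queue_empty(&sk->sk_write_queue);
    }

    static inline bool tcp_rtx_and_write_queues_empty(const struct sock *sk)
    {
            return tcp_rtx_queue_empty(sk) && tcp_write_queue_empty(sk);
    }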

Fixes: 75c119afe14f ("tcp: implement rb-tree based retransmit queue")
Signed-off-by: Eric Dumazet <edumazet@google.com>
Cc: Jason Baron <jbaron@akamai.com>
Cc: Neal Cardwell <ncardwell@google.com>
Acked-by: Soheil Hassas Yeganeh <soheil@google.com>
Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
net/ipv4/tcp.c

index 8a39ee7..7169383 100644
@@ -1087,8 +1087,7 @@ do_error:
                goto out;
 out_err:
        /* make sure we wake any epoll edge trigger waiter */
-       if (unlikely(skb_queue_len(&sk->sk_write_queue) == 0 &&
-                    err == -EAGAIN)) {
+       if (unlikely(tcp_rtx_and_write_queues_empty(sk) && err == -EAGAIN)) {
                sk->sk_write_space(sk);
                tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED);
        }
@@ -1419,8 +1418,7 @@ out_err:
        sock_zerocopy_put_abort(uarg, true);
        err = sk_stream_error(sk, flags, err);
        /* make sure we wake any epoll edge trigger waiter */
-       if (unlikely(skb_queue_len(&sk->sk_write_queue) == 0 &&
-                    err == -EAGAIN)) {
+       if (unlikely(tcp_rtx_and_write_queues_empty(sk) && err == -EAGAIN)) {
                sk->sk_write_space(sk);
                tcp_chrono_stop(sk, TCP_CHRONO_SNDBUF_LIMITED);
        }