From: luca.boccassi@gmail.com
To: =?UTF-8?q?Eugenio=20P=C3=A9rez?=
Cc: Maxime Coquelin , dpdk stable
Date: Tue, 11 Feb 2020 11:22:04 +0000
Message-Id: <20200211112216.3929-178-luca.boccassi@gmail.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200211112216.3929-1-luca.boccassi@gmail.com>
References: <20200211112216.3929-1-luca.boccassi@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Subject: [dpdk-stable] patch 'vhost: flush shadow Tx if no more packets' has been queued to stable release 19.11.1
List-Id: patches for DPDK stable branches

Hi,

FYI, your patch has been queued to stable release 19.11.1

Note it hasn't been pushed to http://dpdk.org/browse/dpdk-stable yet.
It will be pushed if I get no objections before 02/13/20. So please
shout if anyone has objections.

Also note that after the patch there's a diff of the upstream commit vs the
patch applied to the branch.
This will indicate if there was any rebasing
needed to apply to the stable branch. If there were code changes for rebasing
(ie: not only metadata diffs), please double check that the rebase was
correctly done.

Thanks.

Luca Boccassi

---
>From 474437bad30f58321ab8723f175ad5bc862afb8d Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eugenio=20P=C3=A9rez?=
Date: Wed, 29 Jan 2020 20:33:10 +0100
Subject: [PATCH] vhost: flush shadow Tx if no more packets
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

[ upstream commit cdf1dc5e6a361df17d081e3e975cc586a4b7d68d ]

The current implementation of vhost_net in packed vring tries to fill
the shadow vector before send any actual changes to the guest. While
this can be beneficial for the throughput, it conflicts with some
bufferfloats methods like the linux kernel napi, that stops
transmitting packets if there are too much bytes/buffers in the
driver.

To solve it, we flush the shadow packets at the end of
virtio_dev_tx_packed if we have starved the vring, i.e. the next
buffer is not available for the device.

Since this last check can be expensive because of the atomic, we only
check it if we have not obtained the expected "count" packets. If it
happens to obtain "count" packets and there is no more available
packets the caller needs to keep call virtio_dev_tx_packed again.

Fixes: 31d6c6a5b820 ("vhost: optimize packed ring dequeue")

Signed-off-by: Eugenio Pérez
Reviewed-by: Maxime Coquelin
---
 lib/librte_vhost/virtio_net.c | 27 ++++++++++++++++++++++++++-
 1 file changed, 26 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 21c311732a..ac2842b2d2 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -2133,6 +2133,20 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
 	return pkt_idx;
 }
 
+static __rte_always_inline bool
+next_desc_is_avail(const struct vhost_virtqueue *vq)
+{
+	bool wrap_counter = vq->avail_wrap_counter;
+	uint16_t next_used_idx = vq->last_used_idx + 1;
+
+	if (next_used_idx >= vq->size) {
+		next_used_idx -= vq->size;
+		wrap_counter ^= 1;
+	}
+
+	return desc_is_avail(&vq->desc_packed[next_used_idx], wrap_counter);
+}
+
 static __rte_noinline uint16_t
 virtio_dev_tx_packed(struct virtio_net *dev,
 	struct vhost_virtqueue *vq,
@@ -2165,9 +2179,20 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 
 	} while (remained);
 
-	if (vq->shadow_used_idx)
+	if (vq->shadow_used_idx) {
 		do_data_copy_dequeue(vq);
 
+		if (remained && !next_desc_is_avail(vq)) {
+			/*
+			 * The guest may be waiting to TX some buffers to
+			 * enqueue more to avoid bufferfloat, so we try to
+			 * reduce latency here.
+			 */
+			vhost_flush_dequeue_shadow_packed(dev, vq);
+			vhost_vring_call_packed(dev, vq);
+		}
+	}
+
 	return pkt_idx;
 }
--
2.20.1

---
  Diff of the applied patch vs upstream commit (please double-check if non-empty:
---
--- -	2020-02-11 11:17:44.742544050 +0000
+++ 0178-vhost-flush-shadow-Tx-if-no-more-packets.patch	2020-02-11 11:17:38.824009275 +0000
@@ -1,4 +1,4 @@
-From cdf1dc5e6a361df17d081e3e975cc586a4b7d68d Mon Sep 17 00:00:00 2001
+From 474437bad30f58321ab8723f175ad5bc862afb8d Mon Sep 17 00:00:00 2001
 From: =?UTF-8?q?Eugenio=20P=C3=A9rez?=
 Date: Wed, 29 Jan 2020 20:33:10 +0100
 Subject: [PATCH] vhost: flush shadow Tx if no more packets
@@ -6,6 +6,8 @@
 Content-Type: text/plain; charset=UTF-8
 Content-Transfer-Encoding: 8bit
 
+[ upstream commit cdf1dc5e6a361df17d081e3e975cc586a4b7d68d ]
+
 The current implementation of vhost_net in packed vring tries to fill
 the shadow vector before send any actual changes to the guest. While
 this can be beneficial for the throughput, it conflicts with some
@@ -23,7 +25,6 @@
 packets the caller needs to keep call virtio_dev_tx_packed again.
 
 Fixes: 31d6c6a5b820 ("vhost: optimize packed ring dequeue")
-Cc: stable@dpdk.org
 
 Signed-off-by: Eugenio Pérez
 Reviewed-by: Maxime Coquelin
@@ -32,7 +33,7 @@
 1 file changed, 26 insertions(+), 1 deletion(-)
 
 diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
-index 73bf98bd93..37c47c7dc0 100644
+index 21c311732a..ac2842b2d2 100644
 --- a/lib/librte_vhost/virtio_net.c
 +++ b/lib/librte_vhost/virtio_net.c
 @@ -2133,6 +2133,20 @@ virtio_dev_tx_packed_zmbuf(struct virtio_net *dev,
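
For readers without the DPDK tree at hand: the new next_desc_is_avail() helper
delegates to the existing desc_is_avail() routine in virtio_net.c, which the
hunks above do not show. The sketch below illustrates that availability check
as defined by the virtio 1.1 packed-ring layout; it is not the DPDK source, and
the struct and macro names are stand-ins for the real definitions in
lib/librte_vhost.

#include <stdbool.h>
#include <stdint.h>

/* Minimal stand-ins for the packed-ring definitions in lib/librte_vhost;
 * the bit positions follow the virtio 1.1 spec. */
#define DESC_F_AVAIL	(1u << 7)
#define DESC_F_USED	(1u << 15)

struct packed_desc_sketch {
	uint64_t addr;
	uint32_t len;
	uint16_t id;
	uint16_t flags;
};

/*
 * A descriptor is available to the device when its AVAIL flag matches
 * the ring's wrap counter while its USED flag does not. The real
 * desc_is_avail() reads desc->flags atomically, which is presumably the
 * "atomic" cost the commit message mentions.
 */
static inline bool
desc_is_avail_sketch(const struct packed_desc_sketch *desc, bool wrap_counter)
{
	uint16_t flags = desc->flags;

	return wrap_counter == !!(flags & DESC_F_AVAIL) &&
	       wrap_counter != !!(flags & DESC_F_USED);
}

This is also why the patch only performs the check when "remained" is non-zero:
on the common path, where the full "count" of packets was dequeued, the extra
read of the descriptor flags is skipped entirely.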