From: "Xueming(Steven) Li" <xuemingl@nvidia.com>
To: "Xueming(Steven) Li" <xuemingl@nvidia.com>,
"谢华伟(此时此刻)" <huawei.xhw@alibaba-inc.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
"huawei.xie@intel.com" <huawei.xie@intel.com>,
"jerin.jacob@caviumnetworks.com" <jerin.jacob@caviumnetworks.com>,
"drc@linux.vnet.ibm.com" <drc@linux.vnet.ibm.com>,
"stable@dpdk.org" <stable@dpdk.org>,
Maxime Coquelin <maxime.coquelin@redhat.com>,
Chenbo Xia <chenbo.xia@intel.com>,
Jerin Jacob <jerinj@marvell.com>,
Ruifeng Wang <ruifeng.wang@arm.com>,
Bruce Richardson <bruce.richardson@intel.com>,
Konstantin Ananyev <konstantin.ananyev@intel.com>,
Jianfeng Tan <jianfeng.tan@intel.com>,
Jianbo Liu <jianbo.liu@linaro.org>,
Yuanhan Liu <yuanhan.liu@linux.intel.com>
Subject: Re: [dpdk-stable] [PATCH] net/virtio: fix vectorized Rx queue stuck
Date: Wed, 14 Apr 2021 06:11:05 +0000
Message-ID: <BY5PR12MB4324C78DC7FB99B560E01251A14E9@BY5PR12MB4324.namprd12.prod.outlook.com>
In-Reply-To: <20210414042631.7041-1-xuemingl@nvidia.com>
+@谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
> -----Original Message-----
> From: Xueming Li <xuemingl@nvidia.com>
> Sent: Wednesday, April 14, 2021 12:27 PM
> Cc: dev@dpdk.org; Xueming(Steven) Li <xuemingl@nvidia.com>; huawei.xie@intel.com; jerin.jacob@caviumnetworks.com;
> drc@linux.vnet.ibm.com; stable@dpdk.org; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbo.xia@intel.com>;
> Jerin Jacob <jerinj@marvell.com>; Ruifeng Wang <ruifeng.wang@arm.com>; Bruce Richardson <bruce.richardson@intel.com>;
> Konstantin Ananyev <konstantin.ananyev@intel.com>; Jianfeng Tan <jianfeng.tan@intel.com>; Jianbo Liu <jianbo.liu@linaro.org>;
> Yuanhan Liu <yuanhan.liu@linux.intel.com>
> Subject: [PATCH] net/virtio: fix vectorized Rx queue stuck
>
> When the Rx burst size is greater than or equal to the Rx queue size, a
> single burst can consume every descriptor in the used ring without
> rearming any of them. The next Rx burst then finds no new packets and
> returns early, before the rearm code is reached, so the queue stays
> empty and Rx is stuck.
>
> This patch rearms the available ring at the end of the Rx burst, right
> after the descriptors are consumed, so the virtqueue cannot starve.
>
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: huawei.xie@intel.com
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
> drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
> drivers/net/virtio/virtio_rxtx_simple_neon.c | 12 ++++++------
> drivers/net/virtio/virtio_rxtx_simple_sse.c | 12 ++++++------
> 3 files changed, 18 insertions(+), 18 deletions(-)
>
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> index 62e5100a48..1ffae234da 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> @@ -102,12 +102,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>
> rte_prefetch0(rused);
>
> - if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> - virtio_rxq_rearm_vec(rxvq);
> - if (unlikely(virtqueue_kick_prepare(vq)))
> - virtqueue_notify(vq);
> - }
> -
> nb_total = nb_used;
> ref_rx_pkts = rx_pkts;
> for (nb_pkts_received = 0;
> @@ -204,5 +198,11 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
> virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
>
> + if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> + virtio_rxq_rearm_vec(rxvq);
> + if (unlikely(virtqueue_kick_prepare(vq)))
> + virtqueue_notify(vq);
> + }
> +
> return nb_pkts_received;
> }
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
> index c8e4b13a02..341dedce41 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
> @@ -100,12 +100,6 @@ virtio_recv_pkts_vec(void *rx_queue,
>
> rte_prefetch_non_temporal(rused);
>
> - if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> - virtio_rxq_rearm_vec(rxvq);
> - if (unlikely(virtqueue_kick_prepare(vq)))
> - virtqueue_notify(vq);
> - }
> -
> nb_total = nb_used;
> ref_rx_pkts = rx_pkts;
> for (nb_pkts_received = 0;
> @@ -210,5 +204,11 @@ virtio_recv_pkts_vec(void *rx_queue,
> for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
> virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
>
> + if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> + virtio_rxq_rearm_vec(rxvq);
> + if (unlikely(virtqueue_kick_prepare(vq)))
> + virtqueue_notify(vq);
> + }
> +
> return nb_pkts_received;
> }
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
> index ff4eba33d6..2e17f9d1f2 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
> @@ -100,12 +100,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>
> rte_prefetch0(rused);
>
> - if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> - virtio_rxq_rearm_vec(rxvq);
> - if (unlikely(virtqueue_kick_prepare(vq)))
> - virtqueue_notify(vq);
> - }
> -
> nb_total = nb_used;
> ref_rx_pkts = rx_pkts;
> for (nb_pkts_received = 0;
> @@ -194,5 +188,11 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
> virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
>
> + if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> + virtio_rxq_rearm_vec(rxvq);
> + if (unlikely(virtqueue_kick_prepare(vq)))
> + virtqueue_notify(vq);
> + }
> +
> return nb_pkts_received;
> }
> --
> 2.25.1
Thread overview: 6+ messages
2021-04-14 4:26 Xueming Li
2021-04-14 6:11 ` Xueming(Steven) Li [this message]
2021-04-14 14:14 ` [dpdk-stable] [PATCH v1] " Xueming Li
2021-04-16 20:58 ` David Christensen
2021-05-03 14:53 ` Maxime Coquelin
2021-05-04 8:26 ` Maxime Coquelin