DPDK patches and discussions
* [dpdk-dev] [PATCH] net/virtio: fix vectorized Rx queue stuck
@ 2021-04-14  4:26 Xueming Li
  2021-04-14  6:11 ` Xueming(Steven) Li
  2021-04-14 14:14 ` [dpdk-dev] [PATCH v1] " Xueming Li
  0 siblings, 2 replies; 6+ messages in thread
From: Xueming Li @ 2021-04-14  4:26 UTC (permalink / raw)
  Cc: dev, xuemingl, huawei.xie, jerin.jacob, drc, stable,
	Maxime Coquelin, Chenbo Xia, Jerin Jacob, Ruifeng Wang,
	Bruce Richardson, Konstantin Ananyev, Jianfeng Tan, Jianbo Liu,
	Yuanhan Liu

When the Rx burst size is >= the Rx queue size, all descriptors in the
used queue are consumed without rearming, so the next Rx burst finds no
new packets and also returns without rearming.

This patch rearms the available queue right after rx_burst to avoid
starving the vq.
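
To make the failure mode concrete, below is a minimal standalone
simulation of the ring bookkeeping (illustrative names only, not the
driver's actual DPDK code): with the rearm check placed before the
receive loop, a burst that drains the whole queue starves it, while
rearming at the end of the burst keeps it fed.

	#include <stdio.h>

	#define RING_SIZE 512
	#define REARM_THRESH 64

	struct ring { int avail, used, free_cnt; };

	/* Device side: under high PPS it fills every armed descriptor. */
	static void device_fill(struct ring *r)
	{
		r->used += r->avail;
		r->avail = 0;
	}

	/* Give consumed descriptors back once enough have piled up. */
	static void rearm(struct ring *r)
	{
		if (r->free_cnt >= REARM_THRESH) {
			r->avail += r->free_cnt;
			r->free_cnt = 0;
		}
	}

	static int rx_burst(struct ring *r, int n, int rearm_at_end)
	{
		device_fill(r);
		if (r->used == 0)
			return 0;       /* early return: rearm never reached */
		if (!rearm_at_end)
			rearm(r);       /* old position, before the receive loop */
		int got = r->used < n ? r->used : n;
		r->used -= got;
		r->free_cnt += got;     /* consumed, buffers not yet returned */
		if (rearm_at_end)
			rearm(r);       /* fixed position, after the burst */
		return got;
	}

	int main(void)
	{
		for (int fixed = 0; fixed <= 1; fixed++) {
			struct ring r = { RING_SIZE, 0, 0 };
			printf("%s", fixed ? "fixed:" : "buggy:");
			for (int i = 0; i < 4; i++)
				printf(" %d", rx_burst(&r, RING_SIZE, fixed));
			printf("\n"); /* buggy: 512 0 0 0   fixed: 512 512 512 512 */
		}
		return 0;
	}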

Fixes: fc3d66212fed ("virtio: add vector Rx")
Cc: huawei.xie@intel.com
Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
Cc: jerin.jacob@caviumnetworks.com
Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
Cc: drc@linux.vnet.ibm.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
index 62e5100a48..1ffae234da 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
@@ -102,12 +102,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
@@ -204,5 +198,11 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
 		virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	return nb_pkts_received;
 }
diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
index c8e4b13a02..341dedce41 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
@@ -100,12 +100,6 @@ virtio_recv_pkts_vec(void *rx_queue,
 
 	rte_prefetch_non_temporal(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
@@ -210,5 +204,11 @@ virtio_recv_pkts_vec(void *rx_queue,
 	for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
 		virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	return nb_pkts_received;
 }
diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
index ff4eba33d6..2e17f9d1f2 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
@@ -100,12 +100,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
@@ -194,5 +188,11 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
 		virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	return nb_pkts_received;
 }
-- 
2.25.1



* Re: [dpdk-dev] [PATCH] net/virtio: fix vectorized Rx queue stuck
  2021-04-14  4:26 [dpdk-dev] [PATCH] net/virtio: fix vectorized Rx queue stuck Xueming Li
@ 2021-04-14  6:11 ` Xueming(Steven) Li
  2021-04-14 14:14 ` [dpdk-dev] [PATCH v1] " Xueming Li
  1 sibling, 0 replies; 6+ messages in thread
From: Xueming(Steven) Li @ 2021-04-14  6:11 UTC (permalink / raw)
  To: Xueming(Steven) Li,
	谢华伟(此时此刻)
  Cc: dev, huawei.xie, jerin.jacob, drc, stable, Maxime Coquelin,
	Chenbo Xia, Jerin Jacob, Ruifeng Wang, Bruce Richardson,
	Konstantin Ananyev, Jianfeng Tan, Jianbo Liu, Yuanhan Liu

+@谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>

> -----Original Message-----
> From: Xueming Li <xuemingl@nvidia.com>
> Sent: Wednesday, April 14, 2021 12:27 PM
> Cc: dev@dpdk.org; Xueming(Steven) Li <xuemingl@nvidia.com>; huawei.xie@intel.com; jerin.jacob@caviumnetworks.com;
> drc@linux.vnet.ibm.com; stable@dpdk.org; Maxime Coquelin <maxime.coquelin@redhat.com>; Chenbo Xia <chenbo.xia@intel.com>;
> Jerin Jacob <jerinj@marvell.com>; Ruifeng Wang <ruifeng.wang@arm.com>; Bruce Richardson <bruce.richardson@intel.com>;
> Konstantin Ananyev <konstantin.ananyev@intel.com>; Jianfeng Tan <jianfeng.tan@intel.com>; Jianbo Liu <jianbo.liu@linaro.org>;
> Yuanhan Liu <yuanhan.liu@linux.intel.com>
> Subject: [PATCH] net/virtio: fix vectorized Rx queue stuck
> 
> When the Rx burst size is >= the Rx queue size, all descriptors in the used queue are consumed without rearming, so the next Rx
> burst finds no new packets and also returns without rearming.
> 
> This patch rearms the available queue right after rx_burst to avoid starving the vq.
> 
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: huawei.xie@intel.com
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
>  3 files changed, 18 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> index 62e5100a48..1ffae234da 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> @@ -102,12 +102,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> 
>  	rte_prefetch0(rused);
> 
> -	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> -		virtio_rxq_rearm_vec(rxvq);
> -		if (unlikely(virtqueue_kick_prepare(vq)))
> -			virtqueue_notify(vq);
> -	}
> -
>  	nb_total = nb_used;
>  	ref_rx_pkts = rx_pkts;
>  	for (nb_pkts_received = 0;
> @@ -204,5 +198,11 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  	for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
>  		virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
> 
> +	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> +		virtio_rxq_rearm_vec(rxvq);
> +		if (unlikely(virtqueue_kick_prepare(vq)))
> +			virtqueue_notify(vq);
> +	}
> +
>  	return nb_pkts_received;
>  }
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
> index c8e4b13a02..341dedce41 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
> @@ -100,12 +100,6 @@ virtio_recv_pkts_vec(void *rx_queue,
> 
>  	rte_prefetch_non_temporal(rused);
> 
> -	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> -		virtio_rxq_rearm_vec(rxvq);
> -		if (unlikely(virtqueue_kick_prepare(vq)))
> -			virtqueue_notify(vq);
> -	}
> -
>  	nb_total = nb_used;
>  	ref_rx_pkts = rx_pkts;
>  	for (nb_pkts_received = 0;
> @@ -210,5 +204,11 @@ virtio_recv_pkts_vec(void *rx_queue,
>  	for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
>  		virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
> 
> +	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> +		virtio_rxq_rearm_vec(rxvq);
> +		if (unlikely(virtqueue_kick_prepare(vq)))
> +			virtqueue_notify(vq);
> +	}
> +
>  	return nb_pkts_received;
>  }
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
> index ff4eba33d6..2e17f9d1f2 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
> @@ -100,12 +100,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> 
>  	rte_prefetch0(rused);
> 
> -	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> -		virtio_rxq_rearm_vec(rxvq);
> -		if (unlikely(virtqueue_kick_prepare(vq)))
> -			virtqueue_notify(vq);
> -	}
> -
>  	nb_total = nb_used;
>  	ref_rx_pkts = rx_pkts;
>  	for (nb_pkts_received = 0;
> @@ -194,5 +188,11 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>  	for (nb_used = 0; nb_used < nb_pkts_received; nb_used++)
>  		virtio_update_packet_stats(&rxvq->stats, ref_rx_pkts[nb_used]);
> 
> +	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> +		virtio_rxq_rearm_vec(rxvq);
> +		if (unlikely(virtqueue_kick_prepare(vq)))
> +			virtqueue_notify(vq);
> +	}
> +
>  	return nb_pkts_received;
>  }
> --
> 2.25.1



* [dpdk-dev] [PATCH v1] net/virtio: fix vectorized Rx queue stuck
  2021-04-14  4:26 [dpdk-dev] [PATCH] net/virtio: fix vectorized Rx queue stuck Xueming Li
  2021-04-14  6:11 ` Xueming(Steven) Li
@ 2021-04-14 14:14 ` Xueming Li
  2021-04-16 20:58   ` David Christensen
                     ` (2 more replies)
  1 sibling, 3 replies; 6+ messages in thread
From: Xueming Li @ 2021-04-14 14:14 UTC (permalink / raw)
  Cc: Xueming Li, dev,
	谢华伟
	(此时此刻),
	jerin.jacob, drc, stable, Maxime Coquelin, Chenbo Xia,
	Jerin Jacob, Ruifeng Wang, Bruce Richardson, Konstantin Ananyev,
	Jianfeng Tan, Huawei Xie, Jianbo Liu, Yuanhan Liu

From: ".Xueming Li" <xuemingl@nvidia.com>

When an Rx queue works in vectorized mode with rxd <= 512, under a high
PPS traffic rate, testpmd often starts by receiving rxd packets and then
stops receiving any more.

Testpmd starts with an rxq flush that tries to receive and drop
MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
size, all descriptors in the used queue are consumed without rearming,
so the device cannot receive more packets. Each subsequent Rx burst
returns at once because no used descriptors are found, the rearm logic
is skipped, and the Rx vq stays starved.

To avoid starving the Rx vq, this patch always checks the available
queue and rearms it if needed, even when the device reports no used
descriptors.
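
Relative to the first version of this patch, which rearmed at the end of
the burst, the check is now hoisted to the top of the function so it
runs even when the early return on an empty used ring is taken. A rough
sketch of the reordered entry path, reusing the ring helpers from the
simulation earlier in this thread (again illustrative, not the driver's
actual code):

	/* v1 ordering: rearm first, then look for used descriptors. */
	static int rx_burst_v1(struct ring *r, int n)
	{
		rearm(r);          /* refill even if nothing is pending */
		device_fill(r);
		if (r->used == 0)
			return 0;  /* safe: the queue was already rearmed */
		int got = r->used < n ? r->used : n;
		r->used -= got;
		r->free_cnt += got;
		return got;
	}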

Fixes: fc3d66212fed ("virtio: add vector Rx")
Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
Cc: jerin.jacob@caviumnetworks.com
Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
Cc: drc@linux.vnet.ibm.com
Cc: stable@dpdk.org

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
---
 drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
 drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
 3 files changed, 18 insertions(+), 18 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
index 62e5100a48..7534974ef4 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
@@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	nb_used = virtqueue_nused(vq);
 
 	rte_compiler_barrier();
@@ -102,12 +108,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
diff --git a/drivers/net/virtio/virtio_rxtx_simple_neon.c b/drivers/net/virtio/virtio_rxtx_simple_neon.c
index c8e4b13a02..7fd92d1b0c 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_neon.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_neon.c
@@ -84,6 +84,12 @@ virtio_recv_pkts_vec(void *rx_queue,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	/* virtqueue_nused has a load-acquire or rte_io_rmb inside */
 	nb_used = virtqueue_nused(vq);
 
@@ -100,12 +106,6 @@ virtio_recv_pkts_vec(void *rx_queue,
 
 	rte_prefetch_non_temporal(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
diff --git a/drivers/net/virtio/virtio_rxtx_simple_sse.c b/drivers/net/virtio/virtio_rxtx_simple_sse.c
index ff4eba33d6..7577f5e86d 100644
--- a/drivers/net/virtio/virtio_rxtx_simple_sse.c
+++ b/drivers/net/virtio/virtio_rxtx_simple_sse.c
@@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
 		return 0;
 
+	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
+		virtio_rxq_rearm_vec(rxvq);
+		if (unlikely(virtqueue_kick_prepare(vq)))
+			virtqueue_notify(vq);
+	}
+
 	nb_used = virtqueue_nused(vq);
 
 	if (unlikely(nb_used == 0))
@@ -100,12 +106,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
 
 	rte_prefetch0(rused);
 
-	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
-		virtio_rxq_rearm_vec(rxvq);
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
 	nb_total = nb_used;
 	ref_rx_pkts = rx_pkts;
 	for (nb_pkts_received = 0;
-- 
2.25.1



* Re: [dpdk-dev] [PATCH v1] net/virtio: fix vectorized Rx queue stuck
  2021-04-14 14:14 ` [dpdk-dev] [PATCH v1] " Xueming Li
@ 2021-04-16 20:58   ` David Christensen
  2021-05-03 14:53   ` Maxime Coquelin
  2021-05-04  8:26   ` Maxime Coquelin
  2 siblings, 0 replies; 6+ messages in thread
From: David Christensen @ 2021-04-16 20:58 UTC (permalink / raw)
  To: Xueming Li
  Cc: dev,
	谢华伟
	(此时此刻),
	jerin.jacob, stable, Maxime Coquelin, Chenbo Xia, Jerin Jacob,
	Ruifeng Wang, Bruce Richardson, Konstantin Ananyev, Jianfeng Tan,
	Huawei Xie, Jianbo Liu, Yuanhan Liu

> When an Rx queue works in vectorized mode with rxd <= 512, under a high
> PPS traffic rate, testpmd often starts by receiving rxd packets and then
> stops receiving any more.
> 
> Testpmd starts with an rxq flush that tries to receive and drop
> MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
> size, all descriptors in the used queue are consumed without rearming,
> so the device cannot receive more packets. Each subsequent Rx burst
> returns at once because no used descriptors are found, the rearm logic
> is skipped, and the Rx vq stays starved.
> 
> To avoid starving the Rx vq, this patch always checks the available
> queue and rearms it if needed, even when the device reports no used
> descriptors.
> 
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>   drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
>   drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
>   drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
>   3 files changed, 18 insertions(+), 18 deletions(-)
> 
> diff --git a/drivers/net/virtio/virtio_rxtx_simple_altivec.c b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> index 62e5100a48..7534974ef4 100644
> --- a/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> +++ b/drivers/net/virtio/virtio_rxtx_simple_altivec.c
> @@ -85,6 +85,12 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
>   	if (unlikely(nb_pkts < RTE_VIRTIO_DESC_PER_LOOP))
>   		return 0;
> 
> +	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> +		virtio_rxq_rearm_vec(rxvq);
> +		if (unlikely(virtqueue_kick_prepare(vq)))
> +			virtqueue_notify(vq);
> +	}
> +
>   	nb_used = virtqueue_nused(vq);
> 
>   	rte_compiler_barrier();
> @@ -102,12 +108,6 @@ virtio_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
> 
>   	rte_prefetch0(rused);
> 
> -	if (vq->vq_free_cnt >= RTE_VIRTIO_VPMD_RX_REARM_THRESH) {
> -		virtio_rxq_rearm_vec(rxvq);
> -		if (unlikely(virtqueue_kick_prepare(vq)))
> -			virtqueue_notify(vq);
> -	}
> -
>   	nb_total = nb_used;
>   	ref_rx_pkts = rx_pkts;
>   	for (nb_pkts_received = 0;

Reviewed-by: David Christensen <drc@linux.vnet.ibm.com>


* Re: [dpdk-dev] [PATCH v1] net/virtio: fix vectorized Rx queue stuck
  2021-04-14 14:14 ` [dpdk-dev] [PATCH v1] " Xueming Li
  2021-04-16 20:58   ` David Christensen
@ 2021-05-03 14:53   ` Maxime Coquelin
  2021-05-04  8:26   ` Maxime Coquelin
  2 siblings, 0 replies; 6+ messages in thread
From: Maxime Coquelin @ 2021-05-03 14:53 UTC (permalink / raw)
  To: Xueming Li
  Cc: dev,
	谢华伟
	(此时此刻),
	jerin.jacob, drc, stable, Chenbo Xia, Jerin Jacob, Ruifeng Wang,
	Bruce Richardson, Konstantin Ananyev, Jianfeng Tan, Huawei Xie,
	Jianbo Liu, Yuanhan Liu



On 4/14/21 4:14 PM, Xueming Li wrote:
> From: ".Xueming Li" <xuemingl@nvidia.com>
> 
> When an Rx queue works in vectorized mode with rxd <= 512, under a high
> PPS traffic rate, testpmd often starts by receiving rxd packets and then
> stops receiving any more.
> 
> Testpmd starts with an rxq flush that tries to receive and drop
> MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
> size, all descriptors in the used queue are consumed without rearming,
> so the device cannot receive more packets. Each subsequent Rx burst
> returns at once because no used descriptors are found, the rearm logic
> is skipped, and the Rx vq stays starved.
> 
> To avoid starving the Rx vq, this patch always checks the available
> queue and rearms it if needed, even when the device reports no used
> descriptors.
> 
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
>  3 files changed, 18 insertions(+), 18 deletions(-)
> 

Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

Thanks,
Maxime



* Re: [dpdk-dev] [PATCH v1] net/virtio: fix vectorized Rx queue stuck
  2021-04-14 14:14 ` [dpdk-dev] [PATCH v1] " Xueming Li
  2021-04-16 20:58   ` David Christensen
  2021-05-03 14:53   ` Maxime Coquelin
@ 2021-05-04  8:26   ` Maxime Coquelin
  2 siblings, 0 replies; 6+ messages in thread
From: Maxime Coquelin @ 2021-05-04  8:26 UTC (permalink / raw)
  To: Xueming Li
  Cc: dev,
	谢华伟
	(此时此刻),
	jerin.jacob, drc, stable, Chenbo Xia, Jerin Jacob, Ruifeng Wang,
	Bruce Richardson, Konstantin Ananyev, Jianfeng Tan, Huawei Xie,
	Jianbo Liu, Yuanhan Liu



On 4/14/21 4:14 PM, Xueming Li wrote:
> From: ".Xueming Li" <xuemingl@nvidia.com>
> 
> When an Rx queue works in vectorized mode with rxd <= 512, under a high
> PPS traffic rate, testpmd often starts by receiving rxd packets and then
> stops receiving any more.
> 
> Testpmd starts with an rxq flush that tries to receive and drop
> MAX_PKT_BURST (512) packets. When the Rx burst size is >= the Rx queue
> size, all descriptors in the used queue are consumed without rearming,
> so the device cannot receive more packets. Each subsequent Rx burst
> returns at once because no used descriptors are found, the rearm logic
> is skipped, and the Rx vq stays starved.
> 
> To avoid starving the Rx vq, this patch always checks the available
> queue and rearms it if needed, even when the device reports no used
> descriptors.
> 
> Fixes: fc3d66212fed ("virtio: add vector Rx")
> Cc: 谢华伟(此时此刻) <huawei.xhw@alibaba-inc.com>
> Fixes: 2d7c37194ee4 ("net/virtio: add NEON based Rx handler")
> Cc: jerin.jacob@caviumnetworks.com
> Fixes: 52b5a707e6ca ("net/virtio: add Altivec Rx")
> Cc: drc@linux.vnet.ibm.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Xueming Li <xuemingl@nvidia.com>
> ---
>  drivers/net/virtio/virtio_rxtx_simple_altivec.c | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_neon.c    | 12 ++++++------
>  drivers/net/virtio/virtio_rxtx_simple_sse.c     | 12 ++++++------
>  3 files changed, 18 insertions(+), 18 deletions(-)
> 

Applied to dpdk-next-virtio/main.

Thanks,
Maxime



