DPDK patches and discussions
* [dpdk-dev] [PATCH v2 0/2] net/mlx5: fixes for rx queue count calculation
@ 2020-11-12 15:39 Maxime Leroy
  2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix Rx " Maxime Leroy
  2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix Rx descriptor status returned value Maxime Leroy
  0 siblings, 2 replies; 5+ messages in thread
From: Maxime Leroy @ 2020-11-12 15:39 UTC (permalink / raw)
  Cc: dev

This patchset provides two bug fixes for the Rx queue count calculation in the mlx5 driver.

---
V2:
* squash the first and second patches
* fix wrong initialization of 'used' for compressed CQEs

Didier Pallard (1):
  net/mlx5: fix Rx descriptor status returned value

Maxime Leroy (1):
  net/mlx5: fix Rx queue count calculation

 drivers/net/mlx5/mlx5_rxtx.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

-- 
2.27.0


* [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix Rx queue count calculation
  2020-11-12 15:39 [dpdk-dev] [PATCH v2 0/2] net/mlx5: fixes for rx queue count calculation Maxime Leroy
@ 2020-11-12 15:39 ` Maxime Leroy
  2020-11-12 17:04   ` Slava Ovsiienko
  2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix Rx descriptor status returned value Maxime Leroy
  1 sibling, 1 reply; 5+ messages in thread
From: Maxime Leroy @ 2020-11-12 15:39 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Alexander Kozyrev
  Cc: dev, Nelio Laranjeiro

Commit d2d57605522d ("net/mlx5: fix Rx queue count calculation") is
incorrect because the count calculation is wrong for the CQEs of the
next compressed set:

Example:

 Compressed Set of packets 1  |   Compressed Set of packets 2
C | a | e0 | e1 | e2 | e3 | e4 | e5 | C | a | e0

There are two compressed sets of packets in the queue. For the first
set, n is computed correctly.

But for the second set, n is not computed properly, because the zip
context belongs to the first set. The second set is not yet
decompressed, so there is no context for it.

To fix the issue, the zip context should only be used for the first
series of CQEs.
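
A sketch of the resulting logic (matching the diff below, with
explanatory comments added):

	/* A compressed session is in progress: the remaining entries of the
	 * current set come from the zip context, and the walk continues from
	 * its CQ index.
	 */
	if (zip->ai) {
		used = zip->cqe_cnt - zip->ai;
		cq_ci = zip->cq_ci;
	} else {
		used = 0;
		cq_ci = rxq->cq_ci;
	}
	/* Any later compressed set is not decompressed yet, so its size is
	 * read from cqe->byte_cnt instead of the zip context.
	 */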

Fixes: d2d57605522d ("net/mlx5: fix Rx queue count calculation")
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 844a1c63..2733dcd3 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -462,11 +462,18 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
 {
 	struct rxq_zip *zip = &rxq->zip;
 	volatile struct mlx5_cqe *cqe;
-	unsigned int cq_ci = rxq->cq_ci;
 	const unsigned int cqe_n = (1 << rxq->cqe_n);
 	const unsigned int cqe_cnt = cqe_n - 1;
-	unsigned int used = 0;
+	unsigned int cq_ci, used;
 
+	/* if we are processing a compressed cqe */
+	if (zip->ai) {
+		used = zip->cqe_cnt - zip->ai;
+		cq_ci = zip->cq_ci;
+	} else {
+		used = 0;
+		cq_ci = rxq->cq_ci;
+	}
 	cqe = &(*rxq->cqes)[cq_ci & cqe_cnt];
 	while (check_cqe(cqe, cqe_n, cq_ci) != MLX5_CQE_STATUS_HW_OWN) {
 		int8_t op_own;
@@ -474,10 +481,7 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
 
 		op_own = cqe->op_own;
 		if (MLX5_CQE_FORMAT(op_own) == MLX5_COMPRESSED)
-			if (unlikely(zip->ai))
-				n = zip->cqe_cnt - zip->ai;
-			else
-				n = rte_be_to_cpu_32(cqe->byte_cnt);
+			n = rte_be_to_cpu_32(cqe->byte_cnt);
 		else
 			n = 1;
 		cq_ci += n;
-- 
2.27.0


* [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix Rx descriptor status returned value
  2020-11-12 15:39 [dpdk-dev] [PATCH v2 0/2] net/mlx5: fixes for rx queue count calculation Maxime Leroy
  2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix Rx " Maxime Leroy
@ 2020-11-12 15:39 ` Maxime Leroy
  2020-11-13 14:03   ` Slava Ovsiienko
  1 sibling, 1 reply; 5+ messages in thread
From: Maxime Leroy @ 2020-11-12 15:39 UTC (permalink / raw)
  To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Olivier Matz
  Cc: dev, Didier Pallard

From: Didier Pallard <didier.pallard@6wind.com>

One entry may contain several segments, so 'used' must be multiplied
by the number of segments per entry to properly reflect the queue usage.
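
A sketch of the adjustment (matching the diff below; the example value
of sges_n is arbitrary):

	/* rxq->sges_n is the log2 number of segments per entry, e.g. with
	 * rxq->sges_n = 2 each entry spans 1 << 2 = 4 segments, so 8 pending
	 * completions occupy 32 Rx descriptors.
	 */
	const unsigned int sges_n = 1 << rxq->sges_n;

	used = RTE_MIN(used * sges_n, cqe_n);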

Fixes: 8788fec1f269 ("net/mlx5: implement descriptor status API")
Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
---
 drivers/net/mlx5/mlx5_rxtx.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c
index 2733dcd3..f390dd66 100644
--- a/drivers/net/mlx5/mlx5_rxtx.c
+++ b/drivers/net/mlx5/mlx5_rxtx.c
@@ -463,6 +463,7 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
 	struct rxq_zip *zip = &rxq->zip;
 	volatile struct mlx5_cqe *cqe;
 	const unsigned int cqe_n = (1 << rxq->cqe_n);
+	const unsigned int sges_n = (1 << rxq->sges_n);
 	const unsigned int cqe_cnt = cqe_n - 1;
 	unsigned int cq_ci, used;
 
@@ -488,7 +489,7 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
 		used += n;
 		cqe = &(*rxq->cqes)[cq_ci & cqe_cnt];
 	}
-	used = RTE_MIN(used, cqe_n);
+	used = RTE_MIN(used * sges_n, cqe_n);
 	return used;
 }
 
-- 
2.27.0


* Re: [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix Rx queue count calculation
  2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix Rx " Maxime Leroy
@ 2020-11-12 17:04   ` Slava Ovsiienko
  0 siblings, 0 replies; 5+ messages in thread
From: Slava Ovsiienko @ 2020-11-12 17:04 UTC (permalink / raw)
  To: Maxime Leroy, Matan Azrad, Shahaf Shuler, Alexander Kozyrev
  Cc: dev, NBU-Contact-Nélio Laranjeiro

> -----Original Message-----
> From: Maxime Leroy <maxime.leroy@6wind.com>
> Sent: Thursday, November 12, 2020 17:39
> To: Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Alexander Kozyrev
> <akozyrev@nvidia.com>
> Cc: dev@dpdk.org; NBU-Contact-Nélio Laranjeiro
> <nelio.laranjeiro@6wind.com>
> Subject: [PATCH v2 1/2] net/mlx5: fix Rx queue count calculation
> 
> Commit d2d57605522d ("net/mlx5: fix Rx queue count calculation") is
> incorrect because the count calculation is wrong for the CQEs of the next
> compressed set:
> 
> Example:
> 
>  Compressed Set of packets 1  |   Compressed Set of packets 2
> C | a | e0 | e1 | e2 | e3 | e4 | e5 | C | a | e0
> 
> There are two compressed sets of packets in the queue. For the first set,
> n is computed correctly.
> 
> But for the second set, n is not computed properly, because the zip context
> belongs to the first set. The second set is not yet decompressed, so there
> is no context for it.
> 
> To fix the issue, the zip context should only be used for the first series
> of CQEs.
> 
> Fixes: d2d57605522d ("net/mlx5: fix Rx queue count calculation")
> Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
> Signed-off-by: Nelio Laranjeiro <nelio.laranjeiro@6wind.com>

Thank you for the fix. The second patch is still under review - I have some doubts about the final RTE_MIN() clamping, which I am checking.

Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>


* Re: [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix Rx descriptor status returned value
  2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix Rx descriptor status returned value Maxime Leroy
@ 2020-11-13 14:03   ` Slava Ovsiienko
  0 siblings, 0 replies; 5+ messages in thread
From: Slava Ovsiienko @ 2020-11-13 14:03 UTC (permalink / raw)
  To: Maxime Leroy, Matan Azrad, Shahaf Shuler, Olivier Matz
  Cc: dev, Didier Pallard

Hi, Maxime

> -----Original Message-----
> From: Maxime Leroy <maxime.leroy@6wind.com>
> Sent: Thursday, November 12, 2020 17:39
> To: Matan Azrad <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>;
> Slava Ovsiienko <viacheslavo@nvidia.com>; Olivier Matz
> <olivier.matz@6wind.com>
> Cc: dev@dpdk.org; Didier Pallard <didier.pallard@6wind.com>
> Subject: [PATCH v2 2/2] net/mlx5: fix Rx descriptor status returned value
> 
> From: Didier Pallard <didier.pallard@6wind.com>
> 
> One entry may contain several segments, so 'used' must be multiplied by
> the number of segments per entry to properly reflect the queue usage.
> 
> Fixes: 8788fec1f269 ("net/mlx5: implement descriptor status API")
> Signed-off-by: Didier Pallard <didier.pallard@6wind.com>
> Signed-off-by: Maxime Leroy <maxime.leroy@6wind.com>
> ---
>  drivers/net/mlx5/mlx5_rxtx.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/mlx5/mlx5_rxtx.c b/drivers/net/mlx5/mlx5_rxtx.c index
> 2733dcd3..f390dd66 100644
> --- a/drivers/net/mlx5/mlx5_rxtx.c
> +++ b/drivers/net/mlx5/mlx5_rxtx.c
> @@ -463,6 +463,7 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
>  	struct rxq_zip *zip = &rxq->zip;
>  	volatile struct mlx5_cqe *cqe;
>  	const unsigned int cqe_n = (1 << rxq->cqe_n);
> +	const unsigned int sges_n = (1 << rxq->sges_n);
>  	const unsigned int cqe_cnt = cqe_n - 1;
>  	unsigned int cq_ci, used;
> 
> @@ -488,7 +489,7 @@ rx_queue_count(struct mlx5_rxq_data *rxq)
>  		used += n;
>  		cqe = &(*rxq->cqes)[cq_ci & cqe_cnt];
>  	}
> -	used = RTE_MIN(used, cqe_n);
> +	used = RTE_MIN(used * sges_n, cqe_n);

cqe_n reflects the number of CQEs, which may not match the number of data
descriptors in the RxQ.
I suppose the clamping should be:
- for non-MPRQ rx_burst (regular and vectorized): (1 << rxq->elts_n)
- for MPRQ: (1 << rxq->elts_n) * (1 << rxq->strd_n)

For non-MPRQ, rxq->strd_n is zero, so (1 << rxq->strd_n) is 1 and the same
expression covers both cases; hence it could look like:

used = RTE_MIN(used * sges_n, (1 << rxq->elts_n) * (1 << rxq->strd_n));
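
For illustration only (hypothetical sizes, not taken from any real
configuration):

	/* non-MPRQ: elts_n = 9, strd_n = 0
	 *   clamp = (1 << 9) * (1 << 0) = 512 descriptors
	 * MPRQ:     elts_n = 4, strd_n = 5
	 *   clamp = (1 << 4) * (1 << 5) = 512 strides
	 */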

With best regards,
Slava

end of thread, other threads:[~2020-11-13 14:04 UTC | newest]

Thread overview: 5+ messages
2020-11-12 15:39 [dpdk-dev] [PATCH v2 0/2] net/mlx5: fixes for rx queue count calculation Maxime Leroy
2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 1/2] net/mlx5: fix Rx " Maxime Leroy
2020-11-12 17:04   ` Slava Ovsiienko
2020-11-12 15:39 ` [dpdk-dev] [PATCH v2 2/2] net/mlx5: fix Rx descriptor status returned value Maxime Leroy
2020-11-13 14:03   ` Slava Ovsiienko
