patches for DPDK stable branches
* [PATCH] net/mlx5: do not poll CQEs when no available elts
@ 2024-12-06  0:58 Gavin Hu
  2024-12-09  8:10 ` Slava Ovsiienko
  0 siblings, 1 reply; 2+ messages in thread
From: Gavin Hu @ 2024-12-06  0:58 UTC (permalink / raw)
  To: dev
  Cc: stable, Dariusz Sosnowski, Viacheslav Ovsiienko, Bing Zhao,
	Ori Kam, Suanming Mou, Matan Azrad, Alexander Kozyrev

In certain situations, the receive queue (rxq) fails to replenish its
internal ring with memory buffers (mbufs) from the pool. This can happen
when the pool has a limited number of mbufs allocated, and the user
application holds incoming packets for an extended period, resulting in a
delayed release of mbufs. Consequently, the pool becomes depleted,
preventing the rxq from replenishing from it.

The vectorized rxq_cq_process_v routine, which handles completion queue
entries (CQEs) in batches of four, unconditionally accessed four mbufs
from the internal queue ring, regardless of whether they had been
replenished. As a result, it could access mbufs that no longer belonged
to the poll mode driver (PMD).

The fix is to check whether four replenished mbufs are available before
allowing rxq_cq_process_v to handle the batch. Once replenishment
succeeds during the polling process, the routine resumes operation.

Fixes: 1ded26239aa0 ("net/mlx5: refactor vectorized Rx")
Cc: stable@dpdk.org

Reported-by: Changqi Dingluo <dingluochangqi.ck@bytedance.com>
Signed-off-by: Gavin Hu <gahu@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxtx_vec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 1872bf310c..1b701801c5 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -325,6 +325,9 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	/* Not to cross queue end. */
 	pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
 	pkts_n = RTE_MIN(pkts_n, q_n - cq_idx);
+	/* Not to move past the allocated mbufs. */
+	pkts_n = RTE_MIN(pkts_n, RTE_ALIGN_FLOOR(rxq->rq_ci - rxq->rq_pi,
+						MLX5_VPMD_DESCS_PER_LOOP));
 	if (!pkts_n) {
 		*no_cq = !rcvd_pkt;
 		return rcvd_pkt;
-- 
2.18.2



* RE: [PATCH] net/mlx5: do not poll CQEs when no available elts
  2024-12-06  0:58 [PATCH] net/mlx5: do not poll CQEs when no available elts Gavin Hu
@ 2024-12-09  8:10 ` Slava Ovsiienko
  0 siblings, 0 replies; 2+ messages in thread
From: Slava Ovsiienko @ 2024-12-09  8:10 UTC (permalink / raw)
  To: Gavin Hu, dev
  Cc: stable, Dariusz Sosnowski, Bing Zhao, Ori Kam, Suanming Mou,
	Matan Azrad, Alexander Kozyrev

Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

> -----Original Message-----
> From: Gavin Hu <gahu@nvidia.com>
> Sent: Friday, December 6, 2024 2:58 AM
> To: dev@dpdk.org
> Cc: stable@dpdk.org; Dariusz Sosnowski <dsosnowski@nvidia.com>; Slava
> Ovsiienko <viacheslavo@nvidia.com>; Bing Zhao <bingz@nvidia.com>; Ori
> Kam <orika@nvidia.com>; Suanming Mou <suanmingm@nvidia.com>; Matan
> Azrad <matan@nvidia.com>; Alexander Kozyrev <akozyrev@nvidia.com>
> Subject: [PATCH] net/mlx5: do not poll CQEs when no available elts
> 
> In certain situations, the receive queue (rxq) fails to replenish its internal ring
> with memory buffers (mbufs) from the pool. This can happen when the pool
> has a limited number of mbufs allocated, and the user application holds
> incoming packets for an extended period, resulting in a delayed release of
> mbufs. Consequently, the pool becomes depleted, preventing the rxq from
> replenishing from it.
> 
> The vectorized rxq_cq_process_v routine, which handles completion queue
> entries (CQEs) in batches of four, unconditionally accessed four mbufs from
> the internal queue ring, regardless of whether they had been replenished.
> As a result, it could access mbufs that no longer belonged to the poll
> mode driver (PMD).
> 
> The fix is to check whether four replenished mbufs are available before
> allowing rxq_cq_process_v to handle the batch. Once replenishment succeeds
> during the polling process, the routine resumes operation.
> 
> Fixes: 1ded26239aa0 ("net/mlx5: refactor vectorized Rx")
> Cc: stable@dpdk.org
> 
> Reported-by: Changqi Dingluo <dingluochangqi.ck@bytedance.com>
> Signed-off-by: Gavin Hu <gahu@nvidia.com>
> ---
>  drivers/net/mlx5/mlx5_rxtx_vec.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
> index 1872bf310c..1b701801c5 100644
> --- a/drivers/net/mlx5/mlx5_rxtx_vec.c
> +++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
> @@ -325,6 +325,9 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
>  	/* Not to cross queue end. */
>  	pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
>  	pkts_n = RTE_MIN(pkts_n, q_n - cq_idx);
> +	/* Not to move past the allocated mbufs. */
> +	pkts_n = RTE_MIN(pkts_n, RTE_ALIGN_FLOOR(rxq->rq_ci - rxq->rq_pi,
> +						MLX5_VPMD_DESCS_PER_LOOP));
>  	if (!pkts_n) {
>  		*no_cq = !rcvd_pkt;
>  		return rcvd_pkt;
>  	if (!pkts_n) {
>  		*no_cq = !rcvd_pkt;
>  		return rcvd_pkt;
> --
> 2.18.2


