patches for DPDK stable branches
From: Gavin Hu <gahu@nvidia.com>
To: <dev@dpdk.org>
Cc: <stable@dpdk.org>, Dariusz Sosnowski <dsosnowski@nvidia.com>,
	"Viacheslav Ovsiienko" <viacheslavo@nvidia.com>,
	Bing Zhao <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>,
	Suanming Mou <suanmingm@nvidia.com>,
	Matan Azrad <matan@nvidia.com>,
	Alexander Kozyrev <akozyrev@nvidia.com>
Subject: [PATCH] net/mlx5: do not poll CQEs when no available elts
Date: Fri, 6 Dec 2024 02:58:11 +0200	[thread overview]
Message-ID: <20241206005811.948293-1-gahu@nvidia.com> (raw)

In certain situations, the receive queue (rxq) fails to replenish its
internal ring with memory buffers (mbufs) from the pool. This can happen
when the pool has a limited number of mbufs allocated and the user
application holds on to received packets for an extended period, delaying
the release of their mbufs back to the pool. The pool then runs empty and
the rxq can no longer replenish from it.
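
For illustration, a minimal sketch of how such a shortage can arise. The
pool size, burst size and hold logic below are hypothetical and not taken
from the original report:

  #include <rte_ethdev.h>
  #include <rte_mbuf.h>

  #define HELD_MAX 512 /* hypothetical: packets the application keeps */

  static struct rte_mbuf *held[HELD_MAX];
  static unsigned int held_n;

  static void
  poll_port(uint16_t port_id, uint16_t queue_id)
  {
      struct rte_mbuf *burst[32];
      uint16_t i;
      uint16_t n = rte_eth_rx_burst(port_id, queue_id, burst, 32);

      /*
       * The application parks received mbufs instead of freeing them.
       * With a pool of only a few hundred mbufs, e.g. one created with
       * rte_pktmbuf_pool_create("mbuf_pool", 1024, 256, 0,
       * RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id()), the pool soon runs
       * empty and the PMD cannot replenish its Rx ring.
       */
      for (i = 0; i < n && held_n < HELD_MAX; i++)
          held[held_n++] = burst[i];
      for (; i < n; i++)
          rte_pktmbuf_free(burst[i]);
  }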

The vectorized rxq_cq_process_v routine handled completion queue entries
(CQEs) in batches of four. It always took four mbufs from the internal
queue ring, regardless of whether those slots had been replenished. As a
result, it could access mbufs that no longer belonged to the poll mode
driver (PMD).
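
A simplified sketch of the faulty pattern; this is an illustration only,
not the actual mlx5 code, which uses per-architecture vector intrinsics:

  /*
   * Process CQEs in groups of MLX5_VPMD_DESCS_PER_LOOP (4) and take the
   * matching mbufs from the elts ring, without checking whether those
   * slots were ever replenished.
   */
  for (i = 0; i < pkts_n; i += MLX5_VPMD_DESCS_PER_LOOP) {
      pkts[i + 0] = elts[i + 0]; /* may be mbufs the PMD no longer */
      pkts[i + 1] = elts[i + 1]; /* owns, if replenishment failed  */
      pkts[i + 2] = elts[i + 2]; /* for these descriptors          */
      pkts[i + 3] = elts[i + 3];
      /* ... decode the four CQEs and fill in packet metadata ... */
  }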

The fix is to check that at least four replenished mbufs are available
before letting rxq_cq_process_v handle a batch: the number of packets to
poll is clamped to the number of replenished descriptors, rounded down to
a multiple of the batch size. Once replenishment succeeds during a later
poll, the routine resumes normal operation.
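
The clamp in the diff below relies on RTE_ALIGN_FLOOR(), which rounds its
first argument down to a multiple of the (power-of-two) second argument.
A short worked example with made-up counter values:

  /*
   * rq_ci - rq_pi counts descriptors that have been replenished but not
   * yet delivered to the application.
   *
   *   6 replenished -> RTE_ALIGN_FLOOR(6, 4) == 4: one full batch is
   *     polled, the remaining 2 wait for the next call.
   *   3 replenished -> RTE_ALIGN_FLOOR(3, 4) == 0: nothing is polled
   *     until replenishment succeeds.
   */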

Fixes: 1ded26239aa0 ("net/mlx5: refactor vectorized Rx")
Cc: stable@dpdk.org

Reported-by: Changqi Dingluo <dingluochangqi.ck@bytedance.com>
Signed-off-by: Gavin Hu <gahu@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxtx_vec.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/net/mlx5/mlx5_rxtx_vec.c b/drivers/net/mlx5/mlx5_rxtx_vec.c
index 1872bf310c..1b701801c5 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec.c
+++ b/drivers/net/mlx5/mlx5_rxtx_vec.c
@@ -325,6 +325,9 @@ rxq_burst_v(struct mlx5_rxq_data *rxq, struct rte_mbuf **pkts,
 	/* Not to cross queue end. */
 	pkts_n = RTE_MIN(pkts_n, q_n - elts_idx);
 	pkts_n = RTE_MIN(pkts_n, q_n - cq_idx);
+	/* Not to move past the allocated mbufs. */
+	pkts_n = RTE_MIN(pkts_n, RTE_ALIGN_FLOOR(rxq->rq_ci - rxq->rq_pi,
+						MLX5_VPMD_DESCS_PER_LOOP));
 	if (!pkts_n) {
 		*no_cq = !rcvd_pkt;
 		return rcvd_pkt;
-- 
2.18.2


Thread overview: 2+ messages
2024-12-06  0:58 Gavin Hu [this message]
2024-12-09  8:10 ` Slava Ovsiienko
