From: Ruifeng Wang <ruifeng.wang@arm.com>
To: rasland@nvidia.com, matan@nvidia.com, viacheslavo@nvidia.com
Cc: dev@dpdk.org, honnappa.nagarahalli@arm.com, stable@dpdk.org,
nd@arm.com, Ruifeng Wang <ruifeng.wang@arm.com>,
Ali Alnubani <alialnu@nvidia.com>
Subject: [PATCH v2] net/mlx5: fix risk in Rx descriptor read in NEON vector path
Date: Tue, 30 May 2023 13:48:04 +0800
Message-ID: <20230530054804.4101060-1-ruifeng.wang@arm.com>
In-Reply-To: <20220104030056.268974-1-ruifeng.wang@arm.com>
In the NEON vector PMD, a vector load reads two contiguous 8B words of
descriptor data into a vector register. Since the vector load does not
guarantee 16B atomicity, the read of the word that includes the op_own
field could be reordered after the read of the other words. In that
case, some words could contain invalid data.
Reload qword0 after the read barrier to update the vector register.
This ensures that the fetched data is correct.
A single-core testpmd test on N1SDP/ThunderX2 showed no performance drop.
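For illustration, a minimal standalone sketch of the load/barrier/reload
pattern applied below (the helper name and its argument are made up for
this note; the intrinsics and rte_io_rmb() are the ones used in the
driver):

    #include <stdint.h>
    #include <arm_neon.h>
    #include <rte_io.h>

    /* Load the 16B CQE block whose upper 8B carries the op_own field.
     * vld1q_u64() is not single-copy atomic for 16B, so the lower 8B
     * (qword0) may have been read before op_own reported a valid CQE. */
    static inline uint64x2_t
    load_cqe_block(const uint8_t *cqe_block)
    {
            uint64x2_t c = vld1q_u64((const uint64_t *)cqe_block);

            /* Order the ownership check before the remaining CQE reads. */
            rte_io_rmb();

            /* Reload only lane 0 (qword0) after the barrier so that the
             * register holds data fetched from a valid descriptor. */
            return vld1q_lane_u64((const uint64_t *)cqe_block, c, 0);
    }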
Fixes: 1742c2d9fab0 ("net/mlx5: fix synchronization on polling Rx completions")
Cc: stable@dpdk.org
Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
v2: Rebased and added received tags.
drivers/net/mlx5/mlx5_rxtx_vec_neon.h | 8 ++++++++
1 file changed, 8 insertions(+)
diff --git a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
index 75e8ed7e5a..9079da65de 100644
--- a/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
+++ b/drivers/net/mlx5/mlx5_rxtx_vec_neon.h
@@ -675,6 +675,14 @@ rxq_cq_process_v(struct mlx5_rxq_data *rxq, volatile struct mlx5_cqe *cq,
c0 = vld1q_u64((uint64_t *)(p0 + 48));
/* Synchronize for loading the rest of blocks. */
rte_io_rmb();
+ /* B.0 (CQE 3) reload lower half of the block. */
+ c3 = vld1q_lane_u64((uint64_t *)(p3 + 48), c3, 0);
+ /* B.0 (CQE 2) reload lower half of the block. */
+ c2 = vld1q_lane_u64((uint64_t *)(p2 + 48), c2, 0);
+ /* B.0 (CQE 1) reload lower half of the block. */
+ c1 = vld1q_lane_u64((uint64_t *)(p1 + 48), c1, 0);
+ /* B.0 (CQE 0) reload lower half of the block. */
+ c0 = vld1q_lane_u64((uint64_t *)(p0 + 48), c0, 0);
/* Prefetch next 4 CQEs. */
if (pkts_n - pos >= 2 * MLX5_VPMD_DESCS_PER_LOOP) {
unsigned int next = pos + MLX5_VPMD_DESCS_PER_LOOP;
--
2.25.1