From: Ruifeng Wang <Ruifeng.Wang@arm.com>
To: Slava Ovsiienko <viacheslavo@nvidia.com>,
Ali Alnubani <alialnu@nvidia.com>, Matan Azrad <matan@nvidia.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>,
Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
"stable@dpdk.org" <stable@dpdk.org>, nd <nd@arm.com>,
nd <nd@arm.com>
Subject: RE: [PATCH] net/mlx5: fix risk in Rx descriptor read in NEON vector path
Date: Thu, 29 Sep 2022 06:51:20 +0000 [thread overview]
Message-ID: <AS8PR08MB708010B02ACC5528662B404E9E579@AS8PR08MB7080.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <AS8PR08MB7080F9195EB49CAB6DBA35C39EBB9@AS8PR08MB7080.eurprd08.prod.outlook.com>
> -----Original Message-----
> From: Ruifeng Wang
> Sent: Wednesday, June 29, 2022 7:41 PM
> To: Slava Ovsiienko <viacheslavo@nvidia.com>; Ali Alnubani <alialnu@nvidia.com>; Matan
> Azrad <matan@nvidia.com>
> Cc: dev@dpdk.org; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; stable@dpdk.org; nd
> <nd@arm.com>; nd <nd@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH] net/mlx5: fix risk in Rx descriptor read in NEON vector path
>
> > -----Original Message-----
> > From: Slava Ovsiienko <viacheslavo@nvidia.com>
> > Sent: Wednesday, June 29, 2022 3:55 PM
> > To: Ruifeng Wang <Ruifeng.Wang@arm.com>; Ali Alnubani
> > <alialnu@nvidia.com>; Matan Azrad <matan@nvidia.com>
> > Cc: dev@dpdk.org; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>;
> > stable@dpdk.org; nd <nd@arm.com>; nd <nd@arm.com>
> > Subject: RE: [PATCH] net/mlx5: fix risk in Rx descriptor read in NEON
> > vector path
> >
> > Hi, Ruifeng
> >
> > > -----Original Message-----
> > > From: Ruifeng Wang <Ruifeng.Wang@arm.com>
> > > Sent: Monday, June 27, 2022 14:08
> > > To: Slava Ovsiienko <viacheslavo@nvidia.com>; Ali Alnubani
> > > <alialnu@nvidia.com>; Matan Azrad <matan@nvidia.com>
> > > Cc: dev@dpdk.org; Honnappa Nagarahalli
> > <Honnappa.Nagarahalli@arm.com>;
> > > stable@dpdk.org; nd <nd@arm.com>; nd <nd@arm.com>
> > > Subject: RE: [PATCH] net/mlx5: fix risk in Rx descriptor read in
> > > NEON vector path
> > >
> > > > -----Original Message-----
> > > > From: Slava Ovsiienko <viacheslavo@nvidia.com>
> > > > Sent: Monday, June 20, 2022 1:38 PM
> > > > To: Ali Alnubani <alialnu@nvidia.com>; Ruifeng Wang
> > > > <Ruifeng.Wang@arm.com>; Matan Azrad <matan@nvidia.com>
> > > > Cc: dev@dpdk.org; Honnappa Nagarahalli
> > > > <Honnappa.Nagarahalli@arm.com>; stable@dpdk.org; nd <nd@arm.com>
> > > > Subject: RE: [PATCH] net/mlx5: fix risk in Rx descriptor read in
> > > > NEON vector path
> > > >
> > > > Hi, Ruifeng
> > >
> > > Hi Slava,
> > >
> > > Thanks for your review.
> > > >
> > > > My apologies for review delay.
> > >
> > > Apologies too. I was on something else.
> > >
> > > > As far I understand the hypothetical problem scenario is:
> > > > - CPU core reorders reading of qwords of 16B vector
> > > > - core reads the second 8B of CQE (old CQE values)
> > > > - CQE update
> > > > - core reads the first 8B of CQE (new CQE values)
> > >
> > > Yes, this is exactly the problem.
> > > >
> > > > How the re-reading of CQEs can resolve the issue?
> > > > This wrong scenario might happen on the second read and we would
> > > > run into the same issue.
> > >
> > > Here we are trying to order the reads of a 16B vector (the high 8B
> > > containing op_own, and the low 8B without op_own).
> > > The first read loads the full 16B. The second read reloads and
> > > updates the low 8B (no op_own).
> > OK, I got the point, thank you for the explanations.
> > Can we avoid the first read of the low 8B (which does not contain the CQE owning field)?
> >
> > I mean updating this part to read only the upper 8Bs:
> > /* B.0 (CQE 3) load a block having op_own. */
> > c3 = vld1q_u64((uint64_t *)(p3 + 48));
> > /* B.0 (CQE 2) load a block having op_own. */
> > c2 = vld1q_u64((uint64_t *)(p2 + 48));
> > /* B.0 (CQE 1) load a block having op_own. */
> > c1 = vld1q_u64((uint64_t *)(p1 + 48));
> > /* B.0 (CQE 0) load a block having op_own. */
> > c0 = vld1q_u64((uint64_t *)(p0 + 48));
> > /* Synchronize for loading the rest of blocks. */
> > rte_io_rmb();
> >
> > Because the lower 8Bs will be overwritten by the second read (in your
> > patch) and the barrier ensures the correct order.
>
> Hi Slava,
>
> Yes, your suggestion is valid.
> Actually, I tried that approach: load the higher 8B + barrier + load the lower 8B +
> combine the two 8B halves into a vector.
> It also has no observable performance impact, but it generates more instructions than
> the current patch (for the 'combine' operation).
> So I kept the current approach.
>
> Thanks.
> >
Hi Slava,
Are there any further comments?
Thanks,
Ruifeng
Thread overview: 13+ messages
2022-01-04 3:00 Ruifeng Wang
2022-02-10 6:24 ` Ruifeng Wang
2022-02-10 8:16 ` Slava Ovsiienko
2022-02-10 8:29 ` Ruifeng Wang
2022-05-19 14:56 ` Ali Alnubani
2022-06-20 5:37 ` Slava Ovsiienko
2022-06-27 11:08 ` Ruifeng Wang
2022-06-29 7:55 ` Slava Ovsiienko
2022-06-29 11:41 ` Ruifeng Wang
2022-09-29 6:51 ` Ruifeng Wang [this message]
2023-03-07 16:59 ` Slava Ovsiienko
2023-05-30 5:48 ` [PATCH v2] " Ruifeng Wang
2023-06-19 12:13 ` Raslan Darawsheh