DPDK patches and discussions
From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
To: Bruce Richardson <bruce.richardson@intel.com>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>
Cc: Joyce Kong <Joyce.Kong@arm.com>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	Ruifeng Wang <Ruifeng.Wang@arm.com>,
	"dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>, nd <nd@arm.com>
Subject: Re: [dpdk-dev] [PATCH v1] net/i40e: remove the SMP barrier in HW scanning func
Date: Wed, 16 Jun 2021 20:26:00 +0000	[thread overview]
Message-ID: <DBAPR08MB58146AAD840F59FBDC261479980F9@DBAPR08MB5814.eurprd08.prod.outlook.com> (raw)
In-Reply-To: <YMn+oFf8Ek2Zq5uy@bricha3-MOBL.ger.corp.intel.com>

<snip>

> > > > >
> > > > > > >
> > > > > > > Add the logic to determine how many DD bits have been set
> > > > > > > for contiguous packets, so that the SMP barrier can be removed
> > > > > > > while reading descriptors.
> > > > > >
> > > > > > I didn't understand this.
> > > > > > The current logic already guarantees that the read-out DD bits
> > > > > > are from contiguous packets, as it reads the Rx descriptors in
> > > > > > reverse order from the ring.
> > > > > Qi, the comments in the code mention that there is a race
> > > > > condition if the descriptors are not read in the reverse order.
> > > > > But, they do not mention what the race condition is or how it can
> > > > > occur. I would appreciate it if you could explain that.
> > > >
> > > > The race condition happens between the NIC and the CPU. If the DD
> > > > bits are written and read in the same order, there might be a hole
> > > > (e.g. 1 0 1 1). With the reverse read order, we make sure there is
> > > > no "1" after the first "0". As the read addresses are declared as
> > > > volatile, the compiler will not re-order them.
> > > My understanding is that
> > >
> > > 1) the NIC will write an entire cache line of descriptors to memory
> > > "atomically" (i.e. the entire cache line is visible to the CPU at
> > > once) if there are enough descriptors ready to fill one cache line.
> > > 2) But, if there are not enough descriptors ready (because for ex:
> > > there is not enough traffic), then it might write partial cache lines.
> >
> > Yes, for example a cache line contains 4 x 16-byte descriptors, and it
> > is possible we get 1 1 1 0 for the DD bits at some moment.
> >
> > >
> > > Please correct me if I am wrong.
> > >
> > > For #1, I do not think it matters if we read the descriptors in
> > > reverse order or not as the cache line is written atomically.
> >
> > I think the below case may happen if we don't read in reverse order.
> >
> > 1. CPU reads the first cache line as 1 1 1 0 in a loop.
> > 2. New packets arrive and the NIC appends the last 1 to the first cache
> > line, plus a new cache line with 1 1 1 1.
> > 3. CPU continues with the new cache line 1 1 1 1 in the same loop, but
> > the last 1 of the first cache line is missed, so finally it gets
> > 1 1 1 0 1 1 1 1.
> >
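Qi's scenario above is exactly the hole that the reverse read order rules out. As an illustrative sketch (a hypothetical helper, not the actual driver code): because the NIC sets DD bits in ascending ring order while the CPU reads them in descending order, the snapshot the CPU builds is monotone, so the first set bit found from the top bounds a contiguous run of done descriptors.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Illustrative only: dd[] stands in for a snapshot of DD bits captured by
 * reading the (volatile) descriptors in descending ring order. Under that
 * read order the snapshot cannot contain a "1" after a "0", so the first
 * set bit found scanning backwards implies all lower-indexed descriptors
 * in the burst are also done.
 */
static inline int
count_done_reverse(const uint8_t *dd, int burst)
{
	for (int i = burst - 1; i >= 0; i--)
		if (dd[i])		/* first set DD bit from the top ...      */
			return i + 1;	/* ... implies dd[0..i] are all set too */
	return 0;			/* nothing in this burst is done */
}
```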
> 
> The one-sentence answer here is: when two entities are moving along a line in
> the same direction - like two runners in a race - then they can pass each other
> multiple times as each goes slower or faster at any point in time, whereas if
> they are moving in opposite directions there will only ever be one cross-over
> point no matter how the speed of each changes.
> 
> In the case of NIC and software this fact means that there will always be a
> clear cross-over point from DD set to not-set.
Thanks Bruce, that is a great analogy to describe the problem, assuming that the reads actually happen in program order.

On the Arm platform, even though the program reads in reverse order, the reads might get executed in any random order. We have 2 solutions here:
1) Enforce the order with barriers, or
2) Only process descriptors with contiguous DD bits set.
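Option 2 can be sketched roughly as follows (a hypothetical helper, not the actual patch): whatever order the loads complete in, only the leading contiguous run of DD bits is consumed, so a hole such as 1 0 1 1 yields a count of 1 and the tail is simply picked up on the next poll.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch of "process only contiguous DD bits": even if the hardware
 * reads were reordered and produced a hole (e.g. 1 0 1 1), stopping at
 * the first clear bit means a not-yet-completed descriptor is never
 * handed to the application.
 */
static inline int
count_contiguous_dd(const uint8_t *dd, int n)
{
	int nb = 0;

	while (nb < n && dd[nb])	/* stop at the first clear DD bit */
		nb++;
	return nb;
}
```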

> 
> >
> > > For #1, if we read in reverse order, does it make sense to not check
> > > the DD bits of descriptors that are earlier in the order once we
> > > encounter a descriptor that has its DD bit set? This is because the
> > > NIC updates the descriptors in order.
> >
> > I think the answer is yes. When we meet the first DD bit set, we
> > should be able to calculate the exact number based on the index, but I
> > am not sure how much performance gain there is.
> >
> The other factors here are:
> 1. The driver does not do a straight read of all 32 DD bits in one go, rather it
> does 8 at a time and aborts at the end of a set of 8 if not all are valid.
> 2. For any that are set, we have to read the descriptor anyway to get the
> packet data out of it, so in the shortcut case of the last descriptor being set,
> we still have to read the other 7 anyway, and DD comes for free as part of it.
> 3. Blindly reading 8 at a time reduces the branching to just a single decision
> point at the end of each set of 8, reducing possible branch mispredicts.
Agree.
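Bruce's three points above can be sketched structurally like this (a simplified model, not the real i40e_rx_scan_hw_ring; status[] stands in for the volatile descriptor status words): all 8 DD bits in a group are read and summed unconditionally, so there is only one branch per group of 8, and the scan aborts at the end of any group that is not fully done.

```c
#include <assert.h>
#include <stdint.h>

#define LOOK_AHEAD 8	/* mirrors the driver's 8-descriptors-at-a-time scan */
#define DD_BIT     0x1

/*
 * Sum the DD bits of each group of 8 without per-descriptor branching,
 * then make a single decision at the end of the group: continue only if
 * all 8 descriptors were done.
 */
static int
scan_ring(const uint32_t *status, int n, int *aborted)
{
	int nb_rx = 0;

	*aborted = 0;
	for (int i = 0; i < n; i += LOOK_AHEAD) {
		int nb_dd = 0;

		for (int j = 0; j < LOOK_AHEAD; j++)	/* no branch per descriptor */
			nb_dd += status[i + j] & DD_BIT;
		nb_rx += nb_dd;
		if (nb_dd != LOOK_AHEAD) {		/* one decision per group of 8 */
			*aborted = 1;
			break;
		}
	}
	return nb_rx;
}
```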
I think there is another requirement. The other words in the descriptor should be read only after reading the word containing the DD bit.

On x86, program order takes care of this (although a compiler barrier is required).
On Arm, this needs to be taken care of explicitly using barriers.
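The ordering requirement can be sketched with C11 atomics (the descriptor layout and field names here are illustrative, not the real i40e format): the qword holding the DD bit is loaded with acquire semantics, so the subsequent reads of the other descriptor words cannot be reordered before it. On x86 a plain load plus a compiler barrier gives the same guarantee thanks to the strong memory model; on Arm the acquire (or an explicit barrier) does the work.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

/* Illustrative descriptor, not the real i40e layout. */
struct rx_desc {
	_Atomic uint64_t qword1;	/* status qword containing the DD bit */
	uint64_t qword0;		/* buffer address / length, etc. */
};

static int
read_desc_if_done(struct rx_desc *d, uint64_t *qw0)
{
	/* load-acquire: later reads cannot be hoisted before this load */
	uint64_t qw1 = atomic_load_explicit(&d->qword1, memory_order_acquire);

	if (!(qw1 & 0x1))		/* DD not set: descriptor not ready */
		return 0;
	*qw0 = d->qword0;		/* ordered after the DD check by acquire */
	return 1;
}
```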


Thread overview: 22+ messages
2021-06-04  7:34 Joyce Kong
2021-06-04 16:12 ` Honnappa Nagarahalli
2021-06-06 14:17 ` Zhang, Qi Z
2021-06-06 18:33   ` Honnappa Nagarahalli
2021-06-07 14:55     ` Zhang, Qi Z
2021-06-07 21:36       ` Honnappa Nagarahalli
2021-06-15  6:30         ` Joyce Kong
2021-06-16 13:29         ` Zhang, Qi Z
2021-06-16 13:37           ` Bruce Richardson
2021-06-16 20:26             ` Honnappa Nagarahalli [this message]
2021-06-23  8:43 ` [dpdk-dev] [PATCH v2] net/i40e: add logic of processing continuous DD bits for Arm Joyce Kong
2021-06-30  1:14   ` Honnappa Nagarahalli
2021-07-05  3:41     ` Joyce Kong
2021-07-06  6:54 ` [dpdk-dev] [PATCH v3 0/2] fixes for i40e hw scan ring Joyce Kong
2021-07-06  6:54   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: add logic of processing continuous DD bits for Arm Joyce Kong
2021-07-09  3:05     ` Zhang, Qi Z
2021-07-06  6:54   ` [dpdk-dev] [PATCH v3 2/2] net/i40e: replace SMP barrier with thread fence Joyce Kong
2021-07-08 12:09     ` Zhang, Qi Z
2021-07-08 13:51       ` Lance Richardson
2021-07-08 14:26         ` Zhang, Qi Z
2021-07-08 14:44           ` Honnappa Nagarahalli
2021-07-13  0:46     ` Zhang, Qi Z
