DPDK patches and discussions
From: "Zhang, Qi Z" <qi.z.zhang@intel.com>
To: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>,
	Joyce Kong <Joyce.Kong@arm.com>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	Ruifeng Wang <Ruifeng.Wang@arm.com>
Cc: "dev@dpdk.org" <dev@dpdk.org>, nd <nd@arm.com>, nd <nd@arm.com>
Subject: Re: [dpdk-dev] [PATCH v1] net/i40e: remove the SMP barrier in HW scanning func
Date: Wed, 16 Jun 2021 13:29:24 +0000	[thread overview]
Message-ID: <12226b6e56ad4c11845242031c9505d9@intel.com> (raw)
In-Reply-To: <DBAPR08MB58148913B8B4E1111502367498389@DBAPR08MB5814.eurprd08.prod.outlook.com>

Hi

> -----Original Message-----
> From: Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>
> Sent: Tuesday, June 8, 2021 5:36 AM
> To: Zhang, Qi Z <qi.z.zhang@intel.com>; Joyce Kong <Joyce.Kong@arm.com>;
> Xing, Beilei <beilei.xing@intel.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
> Cc: dev@dpdk.org; nd <nd@arm.com>; Honnappa Nagarahalli
> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH v1] net/i40e: remove the SMP barrier in HW scanning
> func
> 
> <snip>
> 
> > >
> > > > >
> > > > > Add logic to determine how many DD bits have been set for
> > > > > contiguous packets, allowing removal of the SMP barrier while reading descs.
> > > >
> > > > I didn't understand this.
> > > > The current logic already guarantees that the DD bits read out are
> > > > from contiguous packets, as it reads the Rx descriptors in reverse
> > > > order from the ring.
> > > Qi, the comments in the code mention that there is a race condition
> > > if the descriptors are not read in the reverse order. But they do
> > > not mention what the race condition is and how it can occur.
> > > I would appreciate it if you could explain that.
> >
> > The race condition happens between the NIC and the CPU: if the DD bits
> > are written and read in the same order, the CPU may observe a hole
> > (e.g. 1011). With the reverse read order, we make sure there is no "1"
> > after the first "0". As the read addresses are declared volatile, the
> > compiler will not reorder the reads.
> My understanding is that
> 
> 1) the NIC will write an entire cache line of descriptors to memory "atomically"
> (i.e. the entire cache line is visible to the CPU at once) if there are enough
> descriptors ready to fill one cache line.
> 2) But, if there are not enough descriptors ready (because, for example,
> there is not enough traffic), then it might write partial cache lines.

Yes. For example, a cache line contains 4 x 16-byte descriptors, and it is possible we observe 1 1 1 0 for the DD bits at some moment.
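
For illustration, a look-ahead window of four descriptors can be modeled as four status words; the sketch below counts only the contiguous DD bits, mirroring the break-on-first-zero loop in the patch (helper name and DD bit position are assumptions for the example, not driver code):

```c
#include <assert.h>
#include <stdint.h>

#define LOOK_AHEAD 4
#define DD_SHIFT   0  /* assumed bit position of the DD flag for this sketch */

/* Count DD bits only up to the first descriptor that is not done. */
static int count_contiguous_dd(const int32_t s[LOOK_AHEAD])
{
	int nb_dd = 0;

	for (int j = 0; j < LOOK_AHEAD; j++) {
		if (s[j] & (1 << DD_SHIFT))
			nb_dd++;
		else
			break;
	}
	return nb_dd;
}
```

With status words 1 1 1 0, only the first three descriptors are treated as done; a descriptor after the hole is never counted.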

> 
> Please correct me if I am wrong.
> 
> For #1, I do not think it matters if we read the descriptors in reverse order or
> not as the cache line is written atomically.

I think the case below may happen if we don't read in reverse order.

1. The CPU reads the first cache line as 1 1 1 0 in a loop.
2. New packets arrive, and the NIC appends the last 1 to the first cache line and writes a new cache line with 1 1 1 1.
3. The CPU continues with the new cache line (1 1 1 1) in the same loop, but the last 1 of the first cache line is missed, so finally it sees 1 1 1 0 1 1 1 1.
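
This scenario can be simulated: naively summing every set DD bit over the scanned window counts descriptors past the hole, while stopping at the first clear bit does not (a standalone sketch with the DD bit assumed at position 0, not the driver's actual scan path):

```c
#include <assert.h>
#include <stdint.h>

/* Naive count: sums every set DD bit, even past a hole. */
static int count_all_dd(const int32_t *s, int n)
{
	int nb = 0;

	for (int j = 0; j < n; j++)
		nb += s[j] & 1;
	return nb;
}

/* Safe count: stops at the first descriptor whose DD bit is clear. */
static int count_until_hole(const int32_t *s, int n)
{
	int nb = 0;

	for (int j = 0; j < n; j++) {
		if (!(s[j] & 1))
			break;
		nb++;
	}
	return nb;
}
```

For the 1 1 1 0 1 1 1 1 snapshot above, the naive sum reports 7 done descriptors and would hand descriptor 3 to the application before the NIC has written it back; the safe count stops at 3.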


> For #1, if we read in reverse order, does it make sense to not check the DD bits
> of descriptors that are earlier in the order once we encounter a descriptor that
> has its DD bit set? This is because NIC updates the descriptors in order.

I think the answer is yes: when we meet the first set DD bit, we should be able to calculate the exact number based on its index, but I am not sure how much performance gain that would give.
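
The idea can be sketched as follows: scanning from the highest index down, the first set DD bit at index j implies descriptors 0..j are all done, so the remaining bits need not be tested (DD bit position and window size are assumptions for the example):

```c
#include <assert.h>
#include <stdint.h>

#define LOOK_AHEAD 4

/* Scan from the highest index down; the first set DD bit at index j
 * gives the count directly, assuming the set bits are contiguous
 * from index 0 (which the reverse read order is meant to guarantee). */
static int count_dd_reverse(const int32_t s[LOOK_AHEAD])
{
	for (int j = LOOK_AHEAD - 1; j >= 0; j--) {
		if (s[j] & 1)
			return j + 1;
	}
	return 0;
}
```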


> 
> >
> > >
> > > On x86, the reads are not re-ordered (though the compiler can
> > > re-order). On ARM, the reads can get re-ordered and hence the
> > > barriers are required. In order to avoid the barriers, we are trying
> > > to process only those descriptors whose DD bits are set such that
> > > they are contiguous. i.e. if the DD bits are 1011, we process only the first
> descriptor.
> >
> > Ok, I see. Thanks for the explanation.
> > At this moment, I would prefer not to change the behavior on x86, so a
> > compile option for Arm can be added. In the future, when we observe no
> > performance impact on x86 as well, we can consider removing it. What do
> > you think?
> I am ok with this approach.
> 
> >
> > >
> > > > So I didn't see any new logic being added; could you describe more
> > > > clearly the purpose of this patch?
> > > >
> > > > >
> > > > > Signed-off-by: Joyce Kong <joyce.kong@arm.com>
> > > > > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> > > > > ---
> > > > >  drivers/net/i40e/i40e_rxtx.c | 13 ++++++++-----
> > > > >  1 file changed, 8 insertions(+), 5 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/i40e/i40e_rxtx.c
> > > > > b/drivers/net/i40e/i40e_rxtx.c index
> > > > > 6c58decec..410a81f30 100644
> > > > > --- a/drivers/net/i40e/i40e_rxtx.c
> > > > > +++ b/drivers/net/i40e/i40e_rxtx.c
> > > > > @@ -452,7 +452,7 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > > > >  	uint16_t pkt_len;
> > > > >  	uint64_t qword1;
> > > > >  	uint32_t rx_status;
> > > > > -	int32_t s[I40E_LOOK_AHEAD], nb_dd;
> > > > > +	int32_t s[I40E_LOOK_AHEAD], var, nb_dd;
> > > > >  	int32_t i, j, nb_rx = 0;
> > > > >  	uint64_t pkt_flags;
> > > > >  	uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
> > > > > @@ -482,11 +482,14 @@ i40e_rx_scan_hw_ring(struct i40e_rx_queue *rxq)
> > > > >  					I40E_RXD_QW1_STATUS_SHIFT;
> > > > >  		}
> > > > >
> > > > > -		rte_smp_rmb();
> > > >
> > > > Any performance gain by removing this? and it is not necessary to
> > > > be combined with below change, right?
> > > >
> > > > > -
> > > > >  		/* Compute how many status bits were set */
> > > > > -		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++)
> > > > > -			nb_dd += s[j] & (1 <<
> > > > I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > > +		for (j = 0, nb_dd = 0; j < I40E_LOOK_AHEAD; j++) {
> > > > > +			var = s[j] & (1 << I40E_RX_DESC_STATUS_DD_SHIFT);
> > > > > +			if (var)
> > > > > +				nb_dd += 1;
> > > > > +			else
> > > > > +				break;
> > > > > +		}
> > > > >
> > > > >  		nb_rx += nb_dd;
> > > > >
> > > > > --
> > > > > 2.17.1


Thread overview: 22+ messages
2021-06-04  7:34 Joyce Kong
2021-06-04 16:12 ` Honnappa Nagarahalli
2021-06-06 14:17 ` Zhang, Qi Z
2021-06-06 18:33   ` Honnappa Nagarahalli
2021-06-07 14:55     ` Zhang, Qi Z
2021-06-07 21:36       ` Honnappa Nagarahalli
2021-06-15  6:30         ` Joyce Kong
2021-06-16 13:29         ` Zhang, Qi Z [this message]
2021-06-16 13:37           ` Bruce Richardson
2021-06-16 20:26             ` Honnappa Nagarahalli
2021-06-23  8:43 ` [dpdk-dev] [PATCH v2] net/i40e: add logic of processing continuous DD bits for Arm Joyce Kong
2021-06-30  1:14   ` Honnappa Nagarahalli
2021-07-05  3:41     ` Joyce Kong
2021-07-06  6:54 ` [dpdk-dev] [PATCH v3 0/2] fixes for i40e hw scan ring Joyce Kong
2021-07-06  6:54   ` [dpdk-dev] [PATCH v3 1/2] net/i40e: add logic of processing continuous DD bits for Arm Joyce Kong
2021-07-09  3:05     ` Zhang, Qi Z
2021-07-06  6:54   ` [dpdk-dev] [PATCH v3 2/2] net/i40e: replace SMP barrier with thread fence Joyce Kong
2021-07-08 12:09     ` Zhang, Qi Z
2021-07-08 13:51       ` Lance Richardson
2021-07-08 14:26         ` Zhang, Qi Z
2021-07-08 14:44           ` Honnappa Nagarahalli
2021-07-13  0:46     ` Zhang, Qi Z
