patches for DPDK stable branches
From: "Zhao1, Wei" <wei.zhao1@intel.com>
To: "Zhang, Qi Z" <qi.z.zhang@intel.com>, "dev@dpdk.org" <dev@dpdk.org>
Cc: "Lu, Wenzhuo" <wenzhuo.lu@intel.com>,
	"stable@dpdk.org" <stable@dpdk.org>
Subject: Re: [dpdk-stable] [PATCH] net/ixgbe: fix Tx check descriptor status APIs error
Date: Mon, 25 Jun 2018 05:58:54 +0000	[thread overview]
Message-ID: <A2573D2ACFCADC41BB3BE09C6DE313CA07DFEF47@PGSMSX103.gar.corp.intel.com> (raw)
In-Reply-To: <039ED4275CED7440929022BC67E706115323C621@SHSMSX103.ccr.corp.intel.com>

Hi, Qi

> -----Original Message-----
> From: Zhang, Qi Z
> Sent: Monday, June 25, 2018 10:49 AM
> To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; stable@dpdk.org
> Subject: RE: [PATCH] net/ixgbe: fix Tx check descriptor status APIs error
> 
> 
> 
> > -----Original Message-----
> > From: Zhao1, Wei
> > Sent: Monday, June 25, 2018 9:58 AM
> > To: Zhang, Qi Z <qi.z.zhang@intel.com>; dev@dpdk.org
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; stable@dpdk.org
> > Subject: RE: [PATCH] net/ixgbe: fix Tx check descriptor status APIs
> > error
> >
> > Hi,
> >
> > > -----Original Message-----
> > > From: Zhang, Qi Z
> > > Sent: Friday, June 22, 2018 9:47 PM
> > > To: Zhao1, Wei <wei.zhao1@intel.com>; dev@dpdk.org
> > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; stable@dpdk.org
> > > Subject: RE: [PATCH] net/ixgbe: fix Tx check descriptor status APIs
> > > error
> > >
> > > Hi Wei:
> > >
> > > > -----Original Message-----
> > > > From: Zhao1, Wei
> > > > Sent: Friday, June 22, 2018 4:39 PM
> > > > To: dev@dpdk.org
> > > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Qi Z
> > > > <qi.z.zhang@intel.com>; stable@dpdk.org; Zhao1, Wei
> > > > <wei.zhao1@intel.com>
> > > > Subject: [PATCH] net/ixgbe: fix Tx check descriptor status APIs
> > > > error
> > > >
> > > > This is an issue with the RS bit setting rule in ixgbe.
> > > > Take the function ixgbe_xmit_pkts_vec() as an example: it sets the
> > > > RS bit on the descriptor with index txq->tx_next_rs, and the
> > > > descriptor free function ixgbe_tx_free_bufs() also checks the RS
> > > > bit on the descriptor with index txq->tx_next_rs. That is perfectly
> > > > fine. Now take a concrete example: if the application sets
> > > > tx_rs_thresh = 32 and nb_desc = 512, the ixgbe PMD initializes
> > > > txq->tx_next_rs = 31 in ixgbe_reset_tx_queue() when the Tx queue is
> > > > set up, and txq->tx_next_rs is then updated to 63, 95 and so on.
> > > > But in ixgbe_dev_tx_descriptor_status() the descriptor whose RS bit
> > > > is checked is "desc = ((desc + txq->tx_rs_thresh - 1) /
> > > > txq->tx_rs_thresh) * txq->tx_rs_thresh", which is 32, 64, 96 and so
> > > > on. So they are all wrong! The Tx function ixgbe_xmit_pkts_simple()
> > > > follows the same RS bit rule and also sets indexes 31, 63, 95.
> > > > We need to correct it.
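
[To make the mismatch described above concrete, here is a minimal standalone C sketch, not ixgbe driver code; the offsets 5, 40 and 70 are arbitrary example values, and tx_rs_thresh = 32 is taken from the description. It compares the index the current ixgbe_dev_tx_descriptor_status() rounds to with the index where the vec/simple Tx paths actually set RS.]

/*
 * Standalone illustration only (not driver code): compare the descriptor
 * index checked today with the index where RS is actually set, for
 * tx_rs_thresh = 32 as in the example above.
 */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	const uint16_t tx_rs_thresh = 32;
	const uint16_t offsets[] = {5, 40, 70};

	for (int i = 0; i < 3; i++) {
		uint16_t desc = offsets[i];
		/* what ixgbe_dev_tx_descriptor_status() checks today */
		uint16_t checked = ((desc + tx_rs_thresh - 1) / tx_rs_thresh) *
				   tx_rs_thresh;
		/* where ixgbe_xmit_pkts_vec()/_simple() actually set RS */
		uint16_t rs_set = (desc / tx_rs_thresh + 1) * tx_rs_thresh - 1;

		printf("desc %3u: checked %3u, RS set at %3u\n",
		       desc, checked, rs_set);
		/* prints: 5 -> 32 vs 31, 40 -> 64 vs 63, 70 -> 96 vs 95 */
	}
	return 0;
}

[For the offsets shown, the index being checked never carries the RS bit, which matches the "they are all wrong" observation above.]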
> > >
> > > One question:
> > > will only the descriptor with the RS bit set get a DD status, or does
> > > the NIC always update every descriptor's DD status, just deferred
> > > until the next descriptor with the RS bit has been sent?
> > > If it is the first case, I think your fix still has a problem, because
> > > a multi-seg mbuf or TSO offload will break the 31, 63, 95 pattern.
> > > See:
> > > 	nb_used = (uint16_t)(tx_pkt->nb_segs + new_ctx);
> > >
> > > 	if (txp != NULL &&
> > > 	    nb_used + txq->nb_tx_used >= txq->tx_rs_thresh)
> > > 		/* set RS on the previous packet in the burst */
> > > 		txp->read.cmd_type_len |=
> > > 			rte_cpu_to_le_32(IXGBE_TXD_CMD_RS);
> > >
> > > So a possible solution is to store each RS position in a list at Tx
> > > time, and look up the next RS position from that list in
> > > ixgbe_dev_tx_descriptor_status().
> > >
> > > If it is the second case, it will be simple: we don't need to search
> > > forward by tx_rs_thresh, just check the exact position (I hope it is
> > > this case :)).
> >
> > In this patch, the code "desc = txq->sw_ring[desc].last_id;" gets the
> > index of the last descriptor of a multi-segment packet, which handles
> > the case where a packet contains more than one segment.
> 
> Yes, but my understanding is that we "set RS on the previous packet", not
> on the packet that crosses the tx_rs_thresh boundary. So even without
> multi-seg it will be 30, 62, 94, not 31, 63, 95. Probably the reason we
> didn't see the issue is that when we test with a burst of 32, the last
> packet of the burst also gets RS set, so it will be 30, 31, 62, 63, 94, 95;
> but if we tested with a burst of 64, I assume it would be 30, 62, 63, 94
> ... right?

There are 3 Tx packet functions in the ixgbe PMD:
ixgbe_xmit_pkts_vec() and ixgbe_xmit_pkts_simple() set the RS bit at indexes 31, 63 and 95; they also do not support multi-segment packet Tx.
ixgbe_xmit_pkts() sets the RS bit as you describe, at indexes 30, 62 and 94; it also sets it at 31, 63 and 95 when we transmit with a burst size of 32.
But the ixgbe_xmit_cleanup() function checks the DD bit at 31, 63 and 95 to free descriptors, so this is a bug in ixgbe_xmit_pkts().
I will submit another patch to fix this problem.
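
[As a side note, here is a minimal standalone sketch, not driver code, of what the patched lookup in ixgbe_dev_tx_descriptor_status() (quoted in the diff below) computes: the offset from tx_tail is wrapped to the ring size and rounded up to the next "thresh - 1" slot, i.e. 31, 63, 95, ...; the helper name next_rs_slot() and the example values are illustrative only. The patch additionally maps that slot through txq->sw_ring[desc].last_id so multi-segment packets resolve to their final descriptor; that indirection is only noted in a comment here.]

/*
 * Standalone sketch of the patched index computation (illustration only).
 * The sw_ring[desc].last_id indirection from the patch is omitted; it would
 * further map the slot to the last descriptor of its packet.
 */
#include <stdint.h>
#include <stdio.h>

static uint16_t
next_rs_slot(uint16_t tx_tail, uint16_t offset,
	     uint16_t tx_rs_thresh, uint16_t nb_tx_desc)
{
	uint16_t desc = tx_tail + offset;

	if (desc >= nb_tx_desc)		/* wrap around the ring */
		desc -= nb_tx_desc;
	/* round up to the next "thresh - 1" position, as in the patch */
	desc = (desc / tx_rs_thresh + 1) * tx_rs_thresh - 1;
	if (desc >= nb_tx_desc)
		desc -= nb_tx_desc;
	return desc;
}

int main(void)
{
	printf("%u\n", next_rs_slot(500, 40, 32, 512));	/* 540 wraps to 28 -> 31 */
	printf("%u\n", next_rs_slot(0, 70, 32, 512));	/* 70 -> 95 */
	return 0;
}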

 
> 
> >
> > >
> > > Regards
> > > Qi
> > > >
> > > > Fixes: a2919e13d95e ("net/ixgbe: implement descriptor status API")
> > > >
> > > > Signed-off-by: Wei Zhao <wei.zhao1@intel.com>
> > > > ---
> > > >  drivers/net/ixgbe/ixgbe_rxtx.c | 12 ++++++------
> > > >  1 file changed, 6 insertions(+), 6 deletions(-)
> > > >
> > > > diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c
> > > > b/drivers/net/ixgbe/ixgbe_rxtx.c index 3e13d26..f185219 100644
> > > > --- a/drivers/net/ixgbe/ixgbe_rxtx.c
> > > > +++ b/drivers/net/ixgbe/ixgbe_rxtx.c
> > > > @@ -3146,15 +3146,15 @@ ixgbe_dev_tx_descriptor_status(void
> > > *tx_queue,
> > > > uint16_t offset)
> > > >  		return -EINVAL;
> > > >
> > > >  	desc = txq->tx_tail + offset;
> > > > +	if (desc >= txq->nb_tx_desc)
> > > > +		desc -= txq->nb_tx_desc;
> > > >  	/* go to next desc that has the RS bit */
> > > > -	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
> > > > -		txq->tx_rs_thresh;
> > > > -	if (desc >= txq->nb_tx_desc) {
> > > > +	desc = (desc  / txq->tx_rs_thresh + 1) *
> > > > +			txq->tx_rs_thresh - 1;
> > > > +	if (desc >= txq->nb_tx_desc)
> > > >  		desc -= txq->nb_tx_desc;
> > > > -		if (desc >= txq->nb_tx_desc)
> > > > -			desc -= txq->nb_tx_desc;
> > > > -	}
> > > >
> > > > +	desc = txq->sw_ring[desc].last_id;
> > > >  	status = &txq->tx_ring[desc].wb.status;
> > > >  	if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
> > > >  		return RTE_ETH_TX_DESC_DONE;
> > > > --
> > > > 2.7.5

Thread overview: 11+ messages
2018-06-22  8:38 Wei Zhao
2018-06-22 13:47 ` Zhang, Qi Z
2018-06-25  1:57   ` Zhao1, Wei
2018-06-25  2:48     ` Zhang, Qi Z
2018-06-25  5:58       ` Zhao1, Wei [this message]
2018-06-25  6:49       ` Zhao1, Wei
2018-06-25  7:47         ` Zhang, Qi Z
2018-06-25  7:52           ` Zhao1, Wei
2018-06-25  7:55             ` Zhang, Qi Z
2018-06-25  8:41               ` Zhao1, Wei
2018-06-26  1:24 ` [dpdk-stable] [PATCH v2] " Wei Zhao
