DPDK patches and discussions
From: "Richardson, Bruce" <bruce.richardson@intel.com>
To: "Roger B. Melton" <rmelton@cisco.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>
Cc: "Lu, Wenzhuo" <wenzhuo.lu@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Ananyev, Konstantin" <konstantin.ananyev@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2] net/e1000: correct VLAN tag byte order for i35x LB packets
Date: Wed, 25 Oct 2017 20:48:28 +0000	[thread overview]
Message-ID: <59AF69C657FD0841A61C55336867B5B0721ED1C2@IRSMSX103.ger.corp.intel.com> (raw)
In-Reply-To: <6256a52f-43f1-3e3f-61af-932b4bd26955@cisco.com>



> -----Original Message-----
> From: Roger B. Melton [mailto:rmelton@cisco.com]
> Sent: Wednesday, October 25, 2017 9:45 PM
> To: Yigit, Ferruh <ferruh.yigit@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; dev@dpdk.org; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: Re: [dpdk-dev] [PATCH v2] net/e1000: correct VLAN tag byte order
> for i35x LB packets
> 
> On 10/25/17 4:22 PM, Ferruh Yigit wrote:
> > On 10/25/2017 1:16 PM, Bruce Richardson wrote:
> >> On Wed, Oct 25, 2017 at 11:11:08AM -0700, Ferruh Yigit wrote:
> >>> On 10/23/2017 10:42 AM, Roger B. Melton wrote:
> >>>> On 10/20/17 3:04 PM, Ferruh Yigit wrote:
> >>>>> On 10/12/2017 10:24 AM, Roger B Melton wrote:
> >>>>>> When copying VLAN tags from the RX descriptor to the vlan_tci
> >>>>>> field in the mbuf header,  igb_rxtx.c:eth_igb_recv_pkts() and
> >>>>>> eth_igb_recv_scattered_pkts() both assume that the VLAN tag is
> >>>>>> always little endian.  While i350, i354 and i350vf VLAN
> >>>>>> non-loopback packets are stored little endian, VLAN tags in
> >>>>>> loopback packets for those devices are big endian.
> >>>>>>
> >>>>>> For i350, i354 and i350vf VLAN loopback packets, swap the tag
> >>>>>> when copying from the RX descriptor to the mbuf header.  This
> >>>>>> will ensure that the mbuf vlan_tci is always little endian.
> >>>>>>
> >>>>>> Signed-off-by: Roger B Melton <rmelton@cisco.com>
> >>>>> <...>
> >>>>>
> >>>>>> @@ -946,9 +954,16 @@ eth_igb_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
> >>>>>>
> >>>>>>    		rxm->hash.rss = rxd.wb.lower.hi_dword.rss;
> >>>>>>    		hlen_type_rss = rte_le_to_cpu_32(rxd.wb.lower.lo_dword.data);
> >>>>>> -		/* Only valid if PKT_RX_VLAN_PKT set in pkt_flags */
> >>>>>> -		rxm->vlan_tci = rte_le_to_cpu_16(rxd.wb.upper.vlan);
> >>>>>> -
> >>>>>> +		/*
> >>>>>> +		 * The vlan_tci field is only valid when PKT_RX_VLAN_PKT is
> >>>>>> +		 * set in the pkt_flags field and must be in CPU byte order.
> >>>>>> +		 */
> >>>>>> +		if ((staterr & rte_cpu_to_le_32(E1000_RXDEXT_STATERR_LB)) &&
> >>>>>> +			(rxq->flags & IGB_RXQ_FLAG_LB_BSWAP_VLAN)) {
> >>>>> This is adding more condition checks into Rx path.
> >>>>> What is the performance cost of this addition?
> >>>> I have not measured the performance cost, but I can collect data.
> >>>> What specifically are you looking for?
> >>>>
> >>>> To be clear, the current implementation is incorrect as it does not
> >>>> normalize the vlan tag to CPU byte order before copying it into
> >>>> mbuf and applications have no visibility to determine if the tag in
> >>>> the mbuf is big or little endian.
> >>>>
> >>>> Do you have any suggestions for an alternative approach to avoid rx
> >>>> path checks?
> >>> No suggestion indeed. And correctness matters.
> >>>
> >>> But this adds a cost and I wonder how much it is; based on that
> >>> result it may be possible to do more investigation for alternate
> >>> solutions or trade-offs.
> >>>
> >>> Konstantin, Bruce, Wenzhuo,
> >>>
> >>> What do you think, do you have any comment?
> >>>
> >> For a 1G driver, is performance really that big an issue?
> > I don't know. So is this an Ack from you for the patch?

No, I don't know enough about this driver to comment. But it's an indication
that I don't have any objections to it either. :-)

> 
> I can tell you that from the perspective of my application the performance
> impact for 1G is not a concern.

That's kinda what I would expect.

> 
> FWIW, I did go through a few iterations with Wenzhuo to minimize the
> performance impact before we settled on this implementation, and Wenzhuo
> did Ack it, btw.
> 
> I'm hoping we can get this into 17.11.
> 
> Thanks,
> -Roger
> 
> >
> >> Unless you
> >> have a *lot* of 1G ports, I would expect most platforms not to notice
> >> an extra couple of cycles when dealing with 1G line rates.
> >>
> >> /Bruce
> >>
> >



Thread overview: 11+ messages
2017-10-12 17:24 Roger B Melton
2017-10-16  0:43 ` Lu, Wenzhuo
2017-10-25 21:37   ` Ferruh Yigit
2017-10-20 19:04 ` Ferruh Yigit
2017-10-23 17:42   ` Roger B. Melton
2017-10-25 18:11     ` Ferruh Yigit
2017-10-25 20:16       ` Bruce Richardson
2017-10-25 20:22         ` Ferruh Yigit
2017-10-25 20:45           ` Roger B. Melton
2017-10-25 20:48             ` Richardson, Bruce [this message]
2017-10-25 21:11               ` Ferruh Yigit
