DPDK patches and discussions
From: Bruce Richardson <bruce.richardson@intel.com>
To: Vincent JARDIN <vincent.jardin@6wind.com>
Cc: John Fastabend <john.fastabend@gmail.com>,
	"Zhang, Qi Z" <qi.z.zhang@intel.com>,
	"Xing, Beilei" <beilei.xing@intel.com>,
	"dev@dpdk.org" <dev@dpdk.org>,
	"Zhang, Helin" <helin.zhang@intel.com>,
	"Yigit, Ferruh" <ferruh.yigit@intel.com>
Subject: Re: [dpdk-dev] [PATCH v2 0/2] AVX2 Vectorized Rx/Tx functions for i40e
Date: Wed, 10 Jan 2018 14:38:57 +0000	[thread overview]
Message-ID: <20180110143857.GA12784@bricha3-MOBL3.ger.corp.intel.com> (raw)
In-Reply-To: <cb6a0c2a-e305-a3e6-5bb5-c7e7e383f917@6wind.com>

On Wed, Jan 10, 2018 at 03:25:23PM +0100, Vincent JARDIN wrote:
> On 10/01/2018 at 10:27, Richardson, Bruce wrote:
> > > Hi Bruce,
> > > 
> > > Just curious, can you provide some hints on percent increase in at least
> > > some representative cases? I'm just trying to get a sense of whether this
> > > is 5%, 10%, 20%, more... I know mileage will vary depending on system, setup,
> > > configuration, etc.
> > > 
> > Best-case conditions to test under are with testpmd, as that is where
> > any IO improvement will be most visible. As a ballpark figure, though,
> > on my system, testing testpmd with both 16B and 32B descriptors (RX/TX
> > ring sizes 1024/512), I saw a ~15% performance increase, and sometimes
> > quite a bit more, e.g. when testing 16B descriptors with larger burst
> > sizes.
> 
> Hi Bruce,
> 
> Then, about the next limit after this performance increase: is it the
> board's Mpps capacity or the PCI bus? If so, you should see CPU usage
> on testpmd's cores decrease. Can you be more explicit about it?
> 

Hi Vincent,

Again, it really depends on your setup. In my case I was using 2 NICs
with one 40G port each, and each one using a PCIe Gen3 x8 connection to
the CPU. I chose this particular setup because there is sufficient NIC
capacity and PCIe bandwidth available that, for 64-byte packets, there
is more IO available than a single core can handle. This patchset
basically reduces the cycles a core needs to process each packet, so in
cases where the core is the bottleneck you will get improved
performance. In other cases, where the PCIe bus or the NIC itself is
the limit, this patch almost certainly won't help, as there are no
changes to the way the NIC descriptor ring is used, e.g. no changes to
descriptor write-back over PCIe, etc.
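
For illustration, a minimal sketch of how the per-packet cycle cost
around rte_eth_rx_burst() could be measured on a single core; the port
and queue ids, the burst size and the poll-loop structure below are
assumptions, not values taken from the patchset or from my test setup:

/* Sketch only: average cycles spent in rte_eth_rx_burst() per received
 * packet. Empty polls are counted too, so this reflects the cost of the
 * whole poll loop, not just packet processing. Port 0 / queue 0 and a
 * burst size of 32 are assumed. */
#include <stdint.h>
#include <stdio.h>
#include <rte_cycles.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32

static void
measure_rx_cycles(uint16_t port_id, uint16_t queue_id, uint64_t iterations)
{
	struct rte_mbuf *bufs[BURST_SIZE];
	uint64_t total_cycles = 0, total_pkts = 0;
	uint64_t i, start;
	uint16_t j, nb_rx;

	for (i = 0; i < iterations; i++) {
		start = rte_rdtsc();
		nb_rx = rte_eth_rx_burst(port_id, queue_id, bufs, BURST_SIZE);
		total_cycles += rte_rdtsc() - start;
		total_pkts += nb_rx;

		/* Drop the packets; a real app would process/forward them. */
		for (j = 0; j < nb_rx; j++)
			rte_pktmbuf_free(bufs[j]);
	}
	if (total_pkts > 0)
		printf("avg cycles/packet: %.1f\n",
				(double)total_cycles / total_pkts);
}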

> What about other packet sizes, such as 66 bytes or 122 bytes, which
> are not aligned on 64 bytes?
> 
Sorry, I don't have comparison data for that to share.

/Bruce
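
For context, a rough sketch of a port setup using the Rx/Tx ring sizes
quoted above (1024/512). The single queue per direction, the zeroed
default port configuration and the caller-provided mbuf pool are
assumptions for brevity; the 16B vs 32B descriptor choice is a separate
i40e build-time option and is not shown here:

/* Sketch only: one Rx and one Tx queue with the ring sizes mentioned
 * above (Rx 1024, Tx 512), default queue configuration otherwise. */
#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

static int
setup_port(uint16_t port_id, struct rte_mempool *mbuf_pool)
{
	struct rte_eth_conf port_conf;
	const uint16_t rx_ring_size = 1024;
	const uint16_t tx_ring_size = 512;
	int ret;

	memset(&port_conf, 0, sizeof(port_conf));

	ret = rte_eth_dev_configure(port_id, 1 /* rx queues */,
			1 /* tx queues */, &port_conf);
	if (ret != 0)
		return ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, rx_ring_size,
			rte_eth_dev_socket_id(port_id), NULL, mbuf_pool);
	if (ret < 0)
		return ret;

	ret = rte_eth_tx_queue_setup(port_id, 0, tx_ring_size,
			rte_eth_dev_socket_id(port_id), NULL);
	if (ret < 0)
		return ret;

	return rte_eth_dev_start(port_id);
}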

Thread overview: 17+ messages
2017-11-23 16:53 [dpdk-dev] [PATCH " Bruce Richardson
2017-11-23 16:53 ` [dpdk-dev] [PATCH 1/2] net/i40e: add AVX2 Tx function Bruce Richardson
2017-11-29  2:13   ` Ferruh Yigit
2017-11-29 10:02     ` Bruce Richardson
2017-11-23 16:53 ` [dpdk-dev] [PATCH 2/2] net/i40e: add AVX2 Rx function Bruce Richardson
2017-11-23 16:56 ` [dpdk-dev] [PATCH 0/2] AVX2 Vectorized Rx/Tx functions for i40e Bruce Richardson
2017-11-27 10:45 ` Zhang, Qi Z
2018-01-09 14:32 ` [dpdk-dev] [PATCH v2 " Bruce Richardson
2018-01-09 14:32   ` [dpdk-dev] [PATCH v2 1/2] net/i40e: add AVX2 Tx function Bruce Richardson
2018-01-09 14:32   ` [dpdk-dev] [PATCH v2 2/2] net/i40e: add AVX2 Rx function Bruce Richardson
2018-01-09 16:30   ` [dpdk-dev] [PATCH v2 0/2] AVX2 Vectorized Rx/Tx functions for i40e John Fastabend
2018-01-10  9:27     ` Richardson, Bruce
2018-01-10 14:25       ` Vincent JARDIN
2018-01-10 14:38         ` Bruce Richardson [this message]
2018-01-10  7:11   ` Li, Xiaoyun
2018-01-10  7:14   ` Zhang, Qi Z
2018-01-10 13:49     ` Zhang, Helin
