DPDK usage discussions
From: Andrew Theurer <atheurer@redhat.com>
To: "Wiles, Keith" <keith.wiles@intel.com>
Cc: Mauricio Valdueza <mvaldueza@vmware.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
Date: Thu, 28 Sep 2017 21:40:41 -0500	[thread overview]
Message-ID: <CAD5-U7Rj5XtnAFgv4OGpYNk87urP3x6DMEU8SJ_EwmmuC3VGVw@mail.gmail.com> (raw)
In-Reply-To: <6613CB6E-EBE2-40AD-9A5C-AA67C1C833D0@intel.com>

In our tests, ~36 Mpps is the maximum we can get.  We usually run a
bidirectional test with TRex, two NICs with one port in use per x8 Gen3 PCIe
adapter, against a device under test with the same hardware configuration
running testpmd with 2 or more queues per port.  Bidirectional aggregate
traffic is in the 72 Mpps range, so in that test each active port is
transmitting and receiving ~36 Mpps; however, I don't believe the received
packets are DMA'd to memory, they are just counted on the adapter.  I have
never observed the Fortville doing higher than that.
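
For reference, the DUT side of that kind of test can be started with
something roughly like the testpmd invocation below; the core list, PCI
addresses, and queue counts are illustrative placeholders rather than our
exact setup:

  ./x86_64-native-linuxapp-gcc/app/testpmd -l 0,2,4,6,8 -n 4 \
      -w 0000:03:00.0 -w 0000:81:00.0 -- \
      --rxq=4 --txq=4 --nb-cores=4 --forward-mode=mac

One forwarding core per queue is the usual starting point when adding
queues per port.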

-Andrew

On Thu, Sep 28, 2017 at 3:59 PM, Wiles, Keith <keith.wiles@intel.com> wrote:

>
> > On Sep 28, 2017, at 6:06 AM, Mauricio Valdueza <mvaldueza@vmware.com> wrote:
> >
> > Hi Guys;
> >
> > I am testing a Fortville 40Gb NIC with Pktgen.
> >
> > I see line rate at 40Gb with a 156B packet size, but once I decrease the
> > size, line rate is far out of reach.
>
> In Pktgen the packet count is taken from the hardware registers on the NIC,
> and the bit rate is calculated from those values. Not all NICs flush the
> TX done queue, so from one start command to the next the numbers can be off
> while the old packets are being recycled alongside the new-size packets.
> Please try the different sizes and restart pktgen between runs, just to see
> if that is the problem.
>
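> Roughly, that rate calculation works off the per-port ethdev counters; a
> simplified sketch is below (not the actual pktgen code; the port number and
> the 1 second window are just assumptions for illustration):
>
>   #include <stdio.h>
>   #include <inttypes.h>
>   #include <rte_ethdev.h>
>   #include <rte_cycles.h>
>
>   /* Sample the per-port counters twice and derive packets/s and bits/s.
>    * The PMD fills rte_eth_stats from the NIC's hardware counters, so any
>    * TX-done recycling lag on the NIC shows up in these numbers too. */
>   static void
>   sample_tx_rate(uint16_t port)
>   {
>       struct rte_eth_stats a, b;
>
>       rte_eth_stats_get(port, &a);
>       rte_delay_ms(1000);                     /* 1 second window */
>       rte_eth_stats_get(port, &b);
>
>       uint64_t tx_pps = b.opackets - a.opackets;
>       uint64_t tx_bps = (b.obytes - a.obytes) * 8;
>
>       printf("port %u: %" PRIu64 " pkts/s, %" PRIu64 " bits/s\n",
>              (unsigned)port, tx_pps, tx_bps);
>   }
>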
> >
> > WITH 158B
> > Link State        :       <UP-40000-FD>     ----TotalRate----
> > Pkts/s Max/Rx     :                 0/0                   0/0
> >     Max/Tx     :   28090480/28089840     28090480/28089840
> > MBits/s Rx/Tx     :             0/40000               0/40000
> > -----------------------------------------------------------------------------------------------
> >
> > WITH 128B
> > Link State        :       <UP-40000-FD>     ----TotalRate----
> > Pkts/s Max/Rx     :                 0/0                   0/0
> >     Max/Tx     :   33784179/33783908     33784179/33783908
> > MBits/s Rx/Tx     :             0/40000               0/40000
> > ------------------------------------------------------------------------------------------------
> >
> > With 64B
> > Link State        :       <UP-40000-FD>     ----TotalRate----
> > Pkts/s Max/Rx     :                 0/0                   0/0
> >     Max/Tx     :   35944587/35941680     35944587/35941680
> > MBits/s Rx/Tx     :             0/24152               0/24152
> > ----------------------------------------------------------------------------------------------
> >
> > Should I run any optimization?
> >
> > My environment is:
> >
> > • VMware ESXi version:     6.5.0, 4887370
> > • Exact NIC version:       Intel Corporation XL710 for 40GbE QSFP+
> > • NIC driver version:      i40en version 1.3.1
> > • Server Vendor:           Dell
> > • Server Make:             Dell Inc. PowerEdge R730
> > • CPU Model:               Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
> > • Huge pages size:         2M
> > • Test VM:                 Ubuntu 16.04
> > • DPDK version (compiled in the VM):  dpdk-17.08
> > • Test traffic kind:       IP/UDP (both tested)
> > • Traffic generator:       pktgen-3.4.1
> >
> >
> > I am executing:
> >
> > sudo ./app/x86_64-native-linuxapp-gcc/pktgen -c 0xff -n 3 --proc-type auto \
> >      --socket-mem 9096 -- -m "[1:2-7].0" --crc-strip
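>
> As an aside, pktgen's -m mapping format is "[rx-cores:tx-cores].port", so
> "[1:2-7].0" should put core 1 on RX and cores 2-7 on TX for port 0. A
> variant that also spreads RX across several cores (core numbers purely
> illustrative) would be something like:
>
>   -m "[2-3:4-7].0"    # cores 2-3 on RX, cores 4-7 on TX for port 0
>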
> >
> >
> > Thanks in advance
> >
> >
> > mauricio
> >
>
> Regards,
> Keith
>
>

Thread overview: 8+ messages
2017-09-28 11:06 Mauricio Valdueza
2017-09-28 20:59 ` Wiles, Keith
2017-09-29  2:40   ` Andrew Theurer [this message]
2017-09-29  4:22     ` Wiles, Keith
2017-09-29 20:53       ` Mauricio Valdueza
2017-09-29 21:30         ` Stephen Hemminger
2017-10-02 11:48         ` Andrew Theurer
2017-10-09 16:02           ` Mauricio Valdueza
