DPDK usage discussions
From: Andrew Theurer <atheurer@redhat.com>
To: Mauricio Valdueza <mvaldueza@vmware.com>
Cc: "Wiles, Keith" <keith.wiles@intel.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
Date: Mon, 2 Oct 2017 06:48:16 -0500	[thread overview]
Message-ID: <CAD5-U7Sv325pAxccC43uEk10=V9oC6eha-Z2wTctsjSRYji7Jg@mail.gmail.com> (raw)
In-Reply-To: <058CF6F5-F30E-41FA-AAE0-6EEA4BAE315D@vmware.com>

On Fri, Sep 29, 2017 at 3:53 PM, Mauricio Valdueza <mvaldueza@vmware.com>
wrote:

> Hi Guys
>
> Max theoretical value is 56.8 Mpps… but practical PCIe limitations allow
> us to reach 42Mpps
>

Which PCI limitation?
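
For reference, the theoretical 64B ceiling follows from the per-frame cost
on the wire (the frame plus 8B of preamble/SFD and a 12B inter-frame gap):

    40e9 bit/s / ((64 + 20) bytes * 8) ≈ 59.5 Mpps
    40e9 bit/s / ((68 + 20) bytes * 8) ≈ 56.8 Mpps

so the 56.8 Mpps figure quoted above appears to count the 4B FCS on top of a
64B payload; that ceiling comes from Ethernet framing rather than from PCIe.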

>
> I am reaching 36Mpps, so where are the 6Mpps lost?
>

Does your hypervisor use 1GB pages for the VM memory?
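
(For context: on a Linux host, 1G pages are typically reserved at boot with
kernel parameters such as

    default_hugepagesz=1G hugepagesz=1G hugepages=16

and verified with "grep Hugepagesize /proc/meminfo"; the page count here is
purely illustrative. Whether ESXi backs the guest memory with large pages is
the question on the VMware side.)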

>
> Mau
>
> On 29/09/2017, 06:22, "Wiles, Keith" <keith.wiles@intel.com> wrote:
>
>
>     > On Sep 28, 2017, at 9:40 PM, Andrew Theurer <atheurer@redhat.com>
> wrote:
>     >
>     > In our tests, ~36Mpps is the maximum we can get.  We usually run a
> test with TRex, bidirectional, 2 PCI cards, 1 port per x8 gen3 PCI adapter,
> with a device under test using the same HW config but running testpmd with
> 2 or more queues per port.  Bidirectional aggregate traffic is in the 72Mpps
> range.  So, in that test, each active port is transmitting and receiving
> ~36Mpps; however, I don't believe the received packets are DMA'd to memory,
> just counted on the adapter.  I have never observed the Fortville doing
> higher than that.
>
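
(For reference, a DUT testpmd invocation along those lines might look like
the following, where the core list, PCI addresses and queue counts are only
illustrative:

    ./x86_64-native-linuxapp-gcc/app/testpmd -l 0-4 -n 4 -w 04:00.0 -w 05:00.0 \
        -- -i --rxq=2 --txq=2 --nb-cores=4

i.e. two RX/TX queue pairs per port spread over four forwarding cores.)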
>     40Gbits is the limit, and I think 36Mpps is the max for the PCI, if I
> remember correctly. TRex must be counting differently, as you stated. I need
> to ask some folks here.
>
>     I have two 40G NICs, but at this time I do not have enough slots to
> put in the other 40G NIC and keep my 10Gs in the system.
>
>     I need to fix the problem below, but have not had the chance.
>
>     >
>     > -Andrew
>     >
>     > On Thu, Sep 28, 2017 at 3:59 PM, Wiles, Keith <keith.wiles@intel.com>
> wrote:
>     >
>     > > On Sep 28, 2017, at 6:06 AM, Mauricio Valdueza <mvaldueza@vmware.com> wrote:
>     > >
>     > > Hi Guys;
>     > >
>     > > I am testing a Fortville 40Gb nic with PKTgen
>     > >
>     > > I see line rate at 40Gb with a 156B packet size, but once I decrease the
> size, line rate is far away
>     >
>     > In Pktgen the packet count is taken from the hardware registers on
> the NIC, and the bit rate is calculated from those values. Not all NICs
> flush the TX done queue, and from one start command to the next the numbers
> can be off, as the old packets are being recycled with the new-size packets.
> Please try the different sizes and restart pktgen between runs, just to
> see if that is the problem.
>     >
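
(As a rough illustration of that point, and not Pktgen's actual code: rates
derived from the NIC's hardware counters through the DPDK stats API would
look something like the sketch below; the 1-second poll interval is arbitrary.)

    #include <stdio.h>
    #include <stdint.h>
    #include <unistd.h>
    #include <rte_ethdev.h>

    /* Poll the port's hardware counters once per second and print the
     * deltas; the NIC registers, not software counts kept by the
     * application, are the source of the packet and byte numbers. */
    static void poll_rates(uint16_t port_id)
    {
            struct rte_eth_stats prev, cur;

            rte_eth_stats_get(port_id, &prev);
            for (;;) {
                    sleep(1);
                    rte_eth_stats_get(port_id, &cur);

                    uint64_t tx_pps = cur.opackets - prev.opackets;
                    uint64_t rx_pps = cur.ipackets - prev.ipackets;
                    /* add 20B of wire overhead (preamble/SFD + IFG) per
                     * frame when turning byte counters into a line rate */
                    uint64_t tx_bps = (cur.obytes - prev.obytes + 20 * tx_pps) * 8;

                    printf("TX %lu pps  %lu bit/s   RX %lu pps\n",
                           (unsigned long)tx_pps, (unsigned long)tx_bps,
                           (unsigned long)rx_pps);
                    prev = cur;
            }
    }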
>     > >
>     > > WITH 158B
>     > > Link State        :       <UP-40000-FD>     ----TotalRate----
>     > > Pkts/s Max/Rx     :                 0/0                   0/0
>     > >     Max/Tx     :   28090480/28089840     28090480/28089840
>     > > MBits/s Rx/Tx     :             0/40000               0/40000
>     > > -----------------------------------------------------------------------------------------------
>     > >
>     > > WITH 128B
>     > > Link State        :       <UP-40000-FD>     ----TotalRate----
>     > > Pkts/s Max/Rx     :                 0/0                   0/0
>     > >     Max/Tx     :   33784179/33783908     33784179/33783908
>     > > MBits/s Rx/Tx     :             0/40000               0/40000
>     > > -----------------------------------------------------------------------------------------------
>     > >
>     > > With 64B
>     > > Link State        :       <UP-40000-FD>     ----TotalRate----
>     > > Pkts/s Max/Rx     :                 0/0                   0/0
>     > >     Max/Tx     :   35944587/35941680     35944587/35941680
>     > > MBits/s Rx/Tx     :             0/24152               0/24152
>     > > -----------------------------------------------------------------------------------------------
>     > >
>     > > Should I run any optimization?
>     > >
>     > > My environment is:
>     > >
>     > > •VMware ESXi version:          6.5.0, 4887370
>     > > •Exact NIC version:            Intel Corporation XL710 for 40GbE QSFP+
>     > > •NIC driver version:           i40en version 1.3.1
>     > > •Server Vendor:                Dell
>     > > •Server Make:                  Dell Inc. PowerEdge R730
>     > > •CPU Model:                    Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
>     > > •Huge pages size:              2M
>     > > •Test VM:                      Ubuntu 16.04
>     > > •DPDK version (built in VM):   dpdk-17.08
>     > > •Test traffic kind (IP/UDP):   Both tested
>     > > •Traffic generator:            Intel pktgen-3.4.1
>     > >
>     > >
>     > > I am executing:
>     > >
>     > > sudo ./app/x86_64-native-linuxapp-gcc/pktgen -c 0xff -n 3
> --proc-type auto --socket-mem 9096 -- -m "[1:2-7].0"  --crc-strip
>     > >
>     > >
>     > > Thanks in advance
>     > >
>     > >
>     > > mauricio
>     > >
>     >
>     > Regards,
>     > Keith
>     >
>     >
>
>     Regards,
>     Keith
>
>
>
>

Thread overview: 8+ messages
2017-09-28 11:06 Mauricio Valdueza
2017-09-28 20:59 ` Wiles, Keith
2017-09-29  2:40   ` Andrew Theurer
2017-09-29  4:22     ` Wiles, Keith
2017-09-29 20:53       ` Mauricio Valdueza
2017-09-29 21:30         ` Stephen Hemminger
2017-10-02 11:48         ` Andrew Theurer [this message]
2017-10-09 16:02           ` Mauricio Valdueza
