* [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Mauricio Valdueza @ 2017-09-28 11:06 UTC (permalink / raw)
To: users
Hi guys,
I am testing a Fortville 40Gb NIC with pktgen.
I see line rate at 40Gb with a 156B packet size, but once I decrease the size, the throughput falls well short of line rate.
WITH 158B
Link State : <UP-40000-FD> ----TotalRate----
Pkts/s Max/Rx : 0/0 0/0
Max/Tx : 28090480/28089840 28090480/28089840
MBits/s Rx/Tx : 0/40000 0/40000
-----------------------------------------------------------------------------------------------
WITH 128B
Link State : <UP-40000-FD> ----TotalRate----
Pkts/s Max/Rx : 0/0 0/0
Max/Tx : 33784179/33783908 33784179/33783908
MBits/s Rx/Tx : 0/40000 0/40000
------------------------------------------------------------------------------------------------
With 64B
Link State : <UP-40000-FD> ----TotalRate----
Pkts/s Max/Rx : 0/0 0/0
Max/Tx : 35944587/35941680 35944587/35941680
MBits/s Rx/Tx : 0/24152 0/24152
----------------------------------------------------------------------------------------------
Should I run any optimization?
My environment is:
• VMware ESXi version: 6.5.0, 4887370
• Exact NIC version: Intel Corporation XL710 for 40GbE QSFP+
• NIC driver version: i40en version 1.3.1
• Server vendor: Dell
• Server make: Dell Inc. PowerEdge R730
• CPU model: Intel(R) Xeon(R) CPU E5-2697 v3 @ 2.60GHz
• Huge page size: 2M
• Test VM: Ubuntu 16.04
• DPDK version (compiled in the VM): dpdk-17.08
• Test traffic kind: IP/UDP, both tested
• Traffic generator: pktgen-3.4.1
I am executing:
sudo ./app/x86_64-native-linuxapp-gcc/pktgen -c 0xff -n 3 --proc-type auto --socket-mem 9096 -- -m "[1:2-7].0" --crc-strip
Thanks in advance
mauricio
* Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Wiles, Keith @ 2017-09-28 20:59 UTC (permalink / raw)
To: Mauricio Valdueza; +Cc: users
> On Sep 28, 2017, at 6:06 AM, Mauricio Valdueza <mvaldueza@vmware.com> wrote:
>
> Hi guys,
>
> I am testing a Fortville 40Gb NIC with pktgen.
>
> I see line rate at 40Gb with a 156B packet size, but once I decrease the size, the throughput falls well short of line rate.
In Pktgen the packet count is taken from the hardware registers on the NIC, and the bit rate is calculated from those values. Not all NICs flush the TX done queue, and from one start command to the next the numbers can be off because old packets are being recycled alongside the new-size packets. Please try the different sizes and bring pktgen down between runs, just to see if that is the problem.
Regards,
Keith
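A minimal sketch (in Python, illustrative only, not pktgen's actual code) of the counter-delta calculation Keith describes, showing why counters carried over from a previous run with a different packet size can skew the first readings until the old frames drain; the read_counters callback is hypothetical:

import time

def poll_rates(read_counters, interval=1.0):
    # read_counters() is a hypothetical callback returning cumulative
    # (packets, octets) as read from the NIC's hardware registers.
    prev_pkts, prev_octets = read_counters()
    while True:
        time.sleep(interval)
        pkts, octets = read_counters()
        pps = (pkts - prev_pkts) / interval
        mbps = (octets - prev_octets) * 8 / interval / 1e6
        # If frames queued during an earlier run are still being counted,
        # these deltas mix old and new frame sizes until the queues drain,
        # so the first samples after a size change can be misleading.
        print(f"{pps / 1e6:.2f} Mpps, {mbps:.0f} Mbit/s")
        prev_pkts, prev_octets = pkts, octets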
* Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Andrew Theurer @ 2017-09-29 2:40 UTC (permalink / raw)
To: Wiles, Keith; +Cc: Mauricio Valdueza, users
In our tests, ~36 Mpps is the maximum we can get. We usually run a test
with TRex, bidirectional, two PCIe cards, one port per x8 Gen3 PCIe adapter,
with a device under test using the same HW config but running testpmd with
two or more queues per port. Bidirectional aggregate traffic is in the
72 Mpps range. So, in that test, each active port is transmitting and
receiving ~36 Mpps; however, I don't believe the received packets are
DMA'd to memory, just counted on the adapter. I have never observed the
Fortville doing higher than that.
-Andrew
* Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Wiles, Keith @ 2017-09-29 4:22 UTC (permalink / raw)
To: Andrew Theurer; +Cc: Mauricio Valdueza, users
> On Sep 28, 2017, at 9:40 PM, Andrew Theurer <atheurer@redhat.com> wrote:
>
> In our tests, ~36 Mpps is the maximum we can get. We usually run a test with TRex, bidirectional, two PCIe cards, one port per x8 Gen3 PCIe adapter, with a device under test using the same HW config but running testpmd with two or more queues per port. Bidirectional aggregate traffic is in the 72 Mpps range. So, in that test, each active port is transmitting and receiving ~36 Mpps; however, I don't believe the received packets are DMA'd to memory, just counted on the adapter. I have never observed the Fortville doing higher than that.
40 Gbit/s is the limit, and I think ~36 Mpps is the max for the PCIe bus, if I remember correctly. TRex must be counting differently, as you stated. I need to ask some folks here.
I have two 40G NICs, but at this time I do not have enough slots to put in the other 40G card and keep my 10Gs in the system.
I need to fix the problem Mauricio reported, but have not had the chance.
Regards,
Keith
* Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Mauricio Valdueza @ 2017-09-29 20:53 UTC (permalink / raw)
To: Wiles, Keith, Andrew Theurer; +Cc: users
Hi guys,
The max theoretical value is 56.8 Mpps… but practical PCIe limitations allow us to reach 42 Mpps.
I am reaching 36 Mpps, so where are the 6 Mpps being lost?
Mau
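For reference, a short line-rate calculation (illustrative; it uses the standard Ethernet framing overhead of an 8-byte preamble/SFD plus a 12-byte inter-frame gap per frame, and the helper below is not part of pktgen):

def line_rate_mpps(frame_size, link_bps=40e9, per_frame_overhead=20):
    # frame_size: Ethernet frame size in bytes, assumed to include the 4-byte FCS;
    # per_frame_overhead: 8B preamble/SFD + 12B inter-frame gap per frame on the wire.
    return link_bps / ((frame_size + per_frame_overhead) * 8) / 1e6

for size in (158, 128, 64):
    print(f"{size}B: {line_rate_mpps(size):.2f} Mpps")
# 158B: 28.09 Mpps and 128B: 33.78 Mpps, both matching the Max/Tx figures reported
# earlier in the thread, so those sizes really are at line rate.
# 64B: 59.52 Mpps (or 56.82 Mpps if the 64B size excludes the 4-byte FCS, i.e. 88B
# on the wire), so the observed ~36 Mpps for 64B frames is a NIC/PCIe ceiling, not
# a line-rate limit.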
* Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Stephen Hemminger @ 2017-09-29 21:30 UTC (permalink / raw)
To: Mauricio Valdueza; +Cc: Wiles, Keith, Andrew Theurer, users
On Fri, 29 Sep 2017 20:53:10 +0000
Mauricio Valdueza <mvaldueza@vmware.com> wrote:
> Hi guys,
> 
> The max theoretical value is 56.8 Mpps… but practical PCIe limitations allow us to reach 42 Mpps.
> 
> I am reaching 36 Mpps, so where are the 6 Mpps being lost?
>
Firmware?
PCIe x16?
PCIe also has per-transaction overhead.
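A rough back-of-envelope for that last point (illustrative only; the 24-byte TLP/DLLP overhead and 16-byte descriptor size below are assumptions, not measured values):

# Usable PCIe 3.0 bandwidth per direction: 8 GT/s per lane, 128b/130b encoding.
lanes = 8
link_bps = lanes * 8e9 * 128 / 130          # ~63.0 Gbit/s for an x8 slot

# Assumed per-packet DMA cost for a 64B TX frame: payload plus one descriptor,
# each carried with roughly 24 bytes of TLP header / DLLP framing overhead.
payload, descriptor, tlp_overhead = 64, 16, 24
bytes_per_packet = payload + descriptor + 2 * tlp_overhead   # 128 bytes

print(f"{link_bps / (bytes_per_packet * 8) / 1e6:.1f} Mpps")  # ~61.5 Mpps
# Even with these assumed overheads, an x8 Gen3 link has more raw capacity than
# 40GbE line rate for 64B frames, which suggests the ~36-42 Mpps ceiling comes
# from per-packet transaction/processing limits rather than raw PCIe bandwidth.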
* Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Andrew Theurer @ 2017-10-02 11:48 UTC (permalink / raw)
To: Mauricio Valdueza; +Cc: Wiles, Keith, users
On Fri, Sep 29, 2017 at 3:53 PM, Mauricio Valdueza <mvaldueza@vmware.com>
wrote:
> Hi guys,
>
> The max theoretical value is 56.8 Mpps… but practical PCIe limitations
> allow us to reach 42 Mpps.
>
Which PCIe limitation?
>
> I am reaching 36 Mpps, so where are the 6 Mpps being lost?
>
Does your hypervisor use 1GB pages for the VM memory?
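A small sketch for checking the hugepage setup from inside the guest (illustrative; this shows the guest's own hugepages used by DPDK, not how the hypervisor backs the VM's memory, which is a separate ESXi setting):

import glob
import os

# Default hugepage size and overall allocation counts, from /proc/meminfo.
with open("/proc/meminfo") as f:
    for line in f:
        if line.startswith(("Hugepagesize", "HugePages_Total", "HugePages_Free")):
            print(line.rstrip())

# Per-size allocations (e.g. hugepages-2048kB, hugepages-1048576kB).
for path in glob.glob("/sys/kernel/mm/hugepages/hugepages-*/nr_hugepages"):
    with open(path) as f:
        print(os.path.basename(os.path.dirname(path)), "nr_hugepages =", f.read().strip())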
* Re: [dpdk-users] Any way to get more than 40Mpps with 64 bytes using an XL710 40Gb NIC
From: Mauricio Valdueza @ 2017-10-09 16:02 UTC (permalink / raw)
To: users
Hi Andrew,
I tuned my VM and now I am reaching 41.9 Mpps. Regarding your questions:
Which PCIe limitation?
See, e.g., Section 5.4 of https://www.net.in.tum.de/fileadmin/bibtex/publications/papers/MoonGen_IMC2015.pdf
Does your hypervisor use 1GB pages for the VM memory?
I used both 2MB and 1GB pages, but saw no difference between the two sizes.
Mau