DPDK usage discussions
* [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
@ 2017-07-21 15:11 Paul Tsvika
  2017-07-21 15:40 ` Wiles, Keith
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Tsvika @ 2017-07-21 15:11 UTC (permalink / raw)
  To: users

Hi developers,

Below is my hardware configuration:

Memory: 32G ( 16 G on two channels )
CPU: Xeon-D 1557 ( 12 cores, 24 threads with hyperthreading )
NIC: Intel X552/X557 10GBASE-T

My motherboard has two 10G ports, so I set up the test in loopback.

I have compiled DPDK successfully; below are the steps I used to configure
pktgen:

$ sudo modprobe uio
$ sudo insmod ./build/kmod/igb_uio.ko
$ sudo ./usertools/dpdk-devbind.py -b igb_uio xxx:xx.0 xxx:xx.1
$ sudo ./app/pktgen -c 0x1f -n 2 -- -P -m "[1:3].0, [2:4].1"
<pktgen> start 0
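
( The binding can be double-checked beforehand with the bind tool's status
option:
$ sudo ./usertools/dpdk-devbind.py --status )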

Cores 1 and 3 handle port 0 RX and TX respectively; cores 2 and
4 handle port 1 RX and TX respectively.
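
( As I understand the pktgen docs, the -m format is "[rx:tx].port", so
"[1:3].0" maps lcore 1 to RX and lcore 3 to TX of port 0; multiple cores
per side would be written like "[1-2:3-4].0". )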

I have some questions listed below:

1.
( check my attachment d2.png )
The TX rate is almost 10G, but the RX rate is much lower than the TX rate.
I assume this should not happen; is my configuration wrong somewhere?
Other developers report that the TX rate normally reaches 999x and that the
RX rate is almost the same as the TX rate.

2.

Please check the attachment ( start_log ).

2.1
*Packet Burst 32, RX Desc 512, TX Desc 1024*, mbufs/port 8192, mbuf cache
1024
Is 32 the default packet size? How can I configure it to 64?
Why is the RX Desc count different from the TX Desc count?

2.2
*Port  0: Link Down <Enable promiscuous mode>*
Port  1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>

Port 0 is down at some points; how can I find the root cause of this issue?



Thanks

-- 
P.T
-------------- next part --------------
A non-text attachment was scrubbed...
Name: d2.png
Type: image/png
Size: 108971 bytes
Desc: not available
URL: <http://dpdk.org/ml/archives/users/attachments/20170721/c22e7dbe/attachment.png>


* Re: [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
  2017-07-21 15:11 [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case Paul Tsvika
@ 2017-07-21 15:40 ` Wiles, Keith
  2017-07-22  1:49   ` Paul Tsvika
  0 siblings, 1 reply; 8+ messages in thread
From: Wiles, Keith @ 2017-07-21 15:40 UTC (permalink / raw)
  To: Paul Tsvika; +Cc: users


> On Jul 21, 2017, at 10:11 AM, Paul Tsvika <mozloverinweb@gmail.com> wrote:
> 
> Hi developers,

Attachments are scrubbed from emails to the list, so it is best to add links or inline the text.

What is the core layout of your system? Can you run the cpu_layout.py tool?
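
Something along these lines should print it ( the path may differ in your
DPDK tree ):

$ ./usertools/cpu_layout.py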

> 
> Below is my hardware configuration:
> 
> Memory: 32G ( 16 G on two channels )
> CPU: Xeon-D 1557 ( 12 cores, 24 threads with hyperthreading )
> NIC: Intel X552/X557 10GBASE-T
> 
> My motherboard has two 10G ports, so I set up the test in loopback.
> 
> I have compiled DPDK successfully; below are the steps I used to configure
> pktgen:
> 
> $ sudo modprobe uio
> $ sudo insmod ./build/kmod/igb_uio.ko
> $ sudo ./usertools/dpdk-devbind.py -b igb_uio xxx:xx.0 xxx:xx.1
> $ sudo ./app/pktgen -c 0x1f -n 2 -- -P -m "[1:3].0, [2:4].1"
> <pktgen> start 0
> 
> Cores 1 and 3 handle port 0 RX and TX respectively; cores 2 and
> 4 handle port 1 RX and TX respectively.
> 
> I have some questions listed below:
> 
> 1.
> ( check my attachment d2.png )
> The TX rate is almost 10G, but the RX rate is much lower than the TX rate.
> I assume this should not happen; is my configuration wrong somewhere?
> Other developers report that the TX rate normally reaches 999x and that the
> RX rate is almost the same as the TX rate.
> 
> 2.
> 
> Please check the attachment ( start_log ).
> 
> 2.1
> *Packet Burst 32, RX Desc 512, TX Desc 1024*, mbufs/port 8192, mbuf cache
> 1024
> Is 32 the default packet size? How can I configure it to 64?
> Why is the RX Desc count different from the TX Desc count?
> 
> 2.2
> *Port  0: Link Down <Enable promiscuous mode>*
> Port  1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
> 
> Port 0 is down at some points; how can I find the root cause of this issue?
> 
> 
> 
> Thanks
> 
> -- 
> P.T
> -------------- next part --------------
> A non-text attachment was scrubbed...
> Name: d2.png
> Type: image/png
> Size: 108971 bytes
> Desc: not available
> URL: <http://dpdk.org/ml/archives/users/attachments/20170721/c22e7dbe/attachment.png>

Regards,
Keith



* Re: [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
  2017-07-21 15:40 ` Wiles, Keith
@ 2017-07-22  1:49   ` Paul Tsvika
  2017-07-25  9:04     ` Paul Tsvika
  2017-07-26 16:44     ` Wiles, Keith
  0 siblings, 2 replies; 8+ messages in thread
From: Paul Tsvika @ 2017-07-22  1:49 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Thanks Keith.

Let me post my questions again and inline the text.

---------------------------------------------------------------------------------------------------------------

Hi developers,

Below is my hardware configuration:

Memory: 32G ( 16 G on two channels )
CPU: Xeon-D 1557
NIC: Intel X552/X557 10GBASE-T

CPU layout: ( Graph generated from cpu_layout.py )

Core and Socket Information (as reported by '/proc/cpuinfo')
============================================================

cores =  [0, 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13]
sockets =  [0]

        Socket 0
        --------
Core 0  [0, 12]
Core 1  [1, 13]
Core 2  [2, 14]
Core 3  [3, 15]
Core 4  [4, 16]
Core 5  [5, 17]
Core 8  [6, 18]
Core 9  [7, 19]
Core 10 [8, 20]
Core 11 [9, 21]
Core 12 [10, 22]
Core 13 [11, 23]
=============================================================

My motherboard has two 10G ports, so I set up the test in loopback.

I have compiled DPDK successfully; below are the steps I used to configure
pktgen:

$ sudo modprobe uio
$ sudo insmod ./build/kmod/igb_uio.ko
$ sudo ./usertools/dpdk-devbind.py -b igb_uio xxx:xx.0 xxx:xx.1
$ sudo ./app/pktgen -c 0x1f -n 2 -- -P -m "[1:3].0, [2:4].1"
<pktgen> start 0

Cores 1 and 3 handle port 0 RX and TX respectively; cores 2 and 4 handle
port 1 RX and TX respectively.

I have some questions listed below:

1.

Other developers who run pktgen successfully report that the TX and RX
rates should be almost the same, hitting 10000 or 99xx.
I have not yet figured out why the RX rate drops when I run pktgen.


 Flags:Port      :   P--------------:0   P--------------:1            10318718/0
Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
Pkts/s Max/Rx     :                 0/0    10318848/8096382      10318848/8096382
       Max/Tx     :   14880683/11528400                 0/0     14880683/11528400
MBits/s Rx/Tx     :              0/7747              5440/0             5440/7747
Broadcast         :                   0                   0
Multicast         :                   0                   0
  64 Bytes        :                   0          4603033247
  65-127          :                   0                   0
  128-255         :                   0                   0
  256-511         :                   0                   0
  512-1023        :                   0                   0
  1024-1518       :                   0                   0
Runts/Jumbos      :                 0/0                 0/0
Errors Rx/Tx      :                 0/0                 0/0
Total Rx Pkts     :                   0          4596736394
      Tx Pkts     :          6588970508                   0
      Rx MBs      :                   0             3089006
      Tx MBs      :             4427788                   0
ARP/ICMP Pkts     :                 0/0                 0/0
Tx Count/% Rate   :       Forever /100%       Forever /100%
Pattern Type      :             abcd...             abcd...
Tx Count/% Rate   :       Forever /100%       Forever /100%
PktSize/Tx Burst  :           64 /   32           64 /   32 -------------------
Src/Dest Port     :         1234 / 5678         1234 / 5678
Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
Dst  IP Address   :         192.168.1.1         192.168.0.1
Src  IP Address   :      192.168.0.1/24      192.168.1.1/24 -------------------
Dst MAC Address   :   0c:c4:7a:c5:43:7b   0c:c4:7a:c5:43:7a
-- Pktgen Ver: 3.3:   8086:15ad/03:00.0   8086:15ad/03:00.1 -------------------


2.1

My start_log below:

Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
   Pktgen created by: Keith Wiles -- >>> Powered by Intel® DPDK <<<

>>> Packet Burst 32, RX Desc 512, TX Desc 1024, mbufs/port 8192, mbuf cache
1024

*[PT]: It looks like the packet size is 32; shouldn't it be 64? Can I
configure it?*
*Also, why are the RX Desc and TX Desc counts not the same?*

=== port to lcore mapping table (# lcores 5) ===
   lcore:    0       1       2       3       4      Total
port   0: ( D: T) ( 1: 0) ( 0: 0) ( 0: 1) ( 0: 0) = ( 1: 1)
port   1: ( D: T) ( 0: 0) ( 1: 0) ( 0: 0) ( 0: 1) = ( 1: 1)
Total   : ( 0: 0) ( 1: 0) ( 1: 0) ( 0: 1) ( 0: 1)
  Display and Timer on lcore 0, rx:tx counts per port/lcore

Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 1024
Lcore:
    1, RX-Only
                RX_cnt( 1): (pid= 0:qid= 0)
    2, RX-Only
                RX_cnt( 1): (pid= 1:qid= 0)
    3, TX-Only
                TX_cnt( 1): (pid= 0:qid= 0)
    4, TX-Only
                TX_cnt( 1): (pid= 1:qid= 0)

Port :
    0, nb_lcores  2, private 0x986ec0, lcores:  1  3
    1, nb_lcores  2, private 0x989220, lcores:  2  4



** Default Info (0000:03:00.0, if_index:0) **
   max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
   max_rx_queues  : 128, max_tx_queues     :  64
   max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
   rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512,
flow_type_rss_offloads:0000000000038d34
   vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
** RX Conf **
   pthresh        :   8, hthresh          :   8, wthresh        :     0
   Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
** TX Conf **
   pthresh        :  32, hthresh          :   0, wthresh        :     0
   Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0,
TXQ Flags:00000f01

    Create: Default RX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
      Set RX queue stats mapping pid 0, q 0, lcore 1


    Create: Default TX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
    Create: Range TX    0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
    Create: Sequence TX 0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
    Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 2176 + Hdr
128)) + 192 =    145 KB headroom 128 2176

Port memory used =  73873 KB
Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7a

** Default Info (0000:03:00.1, if_index:0) **
   max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
   max_rx_queues  : 128, max_tx_queues     :  64
   max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
   rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512,
flow_type_rss_offloads:0000000000038d34
   vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
** RX Conf **
   pthresh        :   8, hthresh          :   8, wthresh        :     0
   Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
** TX Conf **
   pthresh        :  32, hthresh          :   0, wthresh        :     0
   Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0,
TXQ Flags:00000f01

    Create: Default RX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
      Set RX queue stats mapping pid 1, q 0, lcore 2


    Create: Default TX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
    Create: Range TX    1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
    Create: Sequence TX 1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
128)) + 192 =  18433 KB headroom 128 2176
    Create: Special TX  1:0  - Memory used (MBUFs   64 x (size 2176 + Hdr
128)) + 192 =    145 KB headroom 128 2176

Port memory used =  73873 KB
Initialize Port 1 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7b
Total memory used = 147746 KB
Port  0: Link Down <Enable promiscuous mode>
*[PT]: I don't know why Port 0 went down at this point. Is there anything
else I should check?*

Port  1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>

pktgen_packet_capture_init: mz->len 4194304

=== Display processing on lcore 0
  RX processing lcore:   1 rx:  1 tx:  0
For RX found 1 port(s) for lcore 1
  RX processing lcore:   2 rx:  1 tx:  0
For RX found 1 port(s) for lcore 2
  TX processing lcore:   3 rx:  0 tx:  1
For TX found 1 port(s) for lcore 3
  TX processing lcore:   4 rx:  0 tx:  1
For TX found 1 port(s) for lcore 4





Thanks for any feedback.


-- 
P.T


* Re: [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
  2017-07-22  1:49   ` Paul Tsvika
@ 2017-07-25  9:04     ` Paul Tsvika
  2017-07-25 14:10       ` Wiles, Keith
  2017-07-26 16:44     ` Wiles, Keith
  1 sibling, 1 reply; 8+ messages in thread
From: Paul Tsvika @ 2017-07-25  9:04 UTC (permalink / raw)
  To: users

Hi

I have some updates and have added inline comments in my post below.


Thanks.


>
> Memory: 32G ( 16 G on two channels )
> CPU: Xeon-D 1557
> NIC: Intel X552/X557 10GBASE-T
>
> CPU layout: ( Graph generated from cpu_layout.py )
>
> Core and Socket Information (as reported by '/proc/cpuinfo')
> ============================================================
>
> cores =  [0, 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13]
> sockets =  [0]
>
>         Socket 0
>         --------
> Core 0  [0, 12]
> Core 1  [1, 13]
> Core 2  [2, 14]
> Core 3  [3, 15]
> Core 4  [4, 16]
> Core 5  [5, 17]
> Core 8  [6, 18]
> Core 9  [7, 19]
> Core 10 [8, 20]
> Core 11 [9, 21]
> Core 12 [10, 22]
> Core 13 [11, 23]
> =============================================================
>
> My motherboard has two 10G ports, so I set up the test in loopback.
>
> I have compiled DPDK successfully; below are the steps I used to configure
> pktgen:
>
> $ sudo modprobe uio
> $ sudo insmod ./build/kmod/igb_uio.ko
> $ sudo ./usertools/dpdk-devbind.py -b igb_uio xxx:xx.0 xxx:xx.1
> $ sudo ./app/pktgen -c 0x1f -n 2 -- -P -m "[1:3].0, [2:4].1"
> <pktgen> start 0
>
> Cores 1 and 3 handle port 0 RX and TX respectively; cores 2 and 4 handle
> port 1 RX and TX respectively.
>
> I have some questions listed below:
>
> 1.
>
> Other developers who run pktgen successfully report that the TX and RX
> rates should be almost the same, hitting 10000 or 99xx.
> I have not yet figured out why the RX rate drops when I run pktgen.
>
>  Flags:Port      :   P--------------:0   P--------------:1            10318718/0
> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
> Pkts/s Max/Rx     :                 0/0    10318848/8096382      10318848/8096382
>        Max/Tx     :   14880683/11528400                 0/0     14880683/11528400
> MBits/s Rx/Tx     :              0/7747              5440/0             5440/7747
>
>
*[PT 1]: Has any developer experienced this?*

> Broadcast         :                   0                   0
> Multicast         :                   0                   0
>   64 Bytes        :                   0          4603033247
>   65-127          :                   0                   0
>   128-255         :                   0                   0
>   256-511         :                   0                   0
>   512-1023        :                   0                   0
>   1024-1518       :                   0                   0
> Runts/Jumbos      :                 0/0                 0/0
> Errors Rx/Tx      :                 0/0                 0/0
> Total Rx Pkts     :                   0          4596736394
>       Tx Pkts     :          6588970508                   0
>       Rx MBs      :                   0             3089006
>       Tx MBs      :             4427788                   0
> ARP/ICMP Pkts     :                 0/0                 0/0
> Tx Count/% Rate   :       Forever /100%       Forever /100%
> Pattern Type      :             abcd...             abcd...
> Tx Count/% Rate   :       Forever /100%       Forever /100%
> PktSize/Tx Burst  :           64 /   32           64 /   32 -------------------
> Src/Dest Port     :         1234 / 5678         1234 / 5678
> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> Dst  IP Address   :         192.168.1.1         192.168.0.1
> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24 -------------------
> Dst MAC Address   :   0c:c4:7a:c5:43:7b   0c:c4:7a:c5:43:7a
> -- Pktgen Ver: 3.3:   8086:15ad/03:00.0   8086:15ad/03:00.1 -------------------
>
>
> 2.1
>
> My start_log below:
>
> Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
>    Pktgen created by: Keith Wiles -- >>> Powered by Intel® DPDK <<<
>
> >>> Packet Burst 32, RX Desc 512, TX Desc 1024, mbufs/port 8192, mbuf
> cache 1024
>
> *[PT]: It looks like the packet size is 32; shouldn't it be 64? Can I
> configure it?*
> *Also, why are the RX Desc and TX Desc counts not the same?*
>
> === port to lcore mapping table (# lcores 5) ===
>    lcore:    0       1       2       3       4      Total
> port   0: ( D: T) ( 1: 0) ( 0: 0) ( 0: 1) ( 0: 0) = ( 1: 1)
> port   1: ( D: T) ( 0: 0) ( 1: 0) ( 0: 0) ( 0: 1) = ( 1: 1)
> Total   : ( 0: 0) ( 1: 0) ( 1: 0) ( 0: 1) ( 0: 1)
>   Display and Timer on lcore 0, rx:tx counts per port/lcore
>
> Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 1024
> Lcore:
>     1, RX-Only
>                 RX_cnt( 1): (pid= 0:qid= 0)
>     2, RX-Only
>                 RX_cnt( 1): (pid= 1:qid= 0)
>     3, TX-Only
>                 TX_cnt( 1): (pid= 0:qid= 0)
>     4, TX-Only
>                 TX_cnt( 1): (pid= 1:qid= 0)
>
> Port :
>     0, nb_lcores  2, private 0x986ec0, lcores:  1  3
>     1, nb_lcores  2, private 0x989220, lcores:  2  4
>
>
>
> ** Default Info (0000:03:00.0, if_index:0) **
>    max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
>    max_rx_queues  : 128, max_tx_queues     :  64
>    max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
>    rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512,
> flow_type_rss_offloads:0000000000038d34
>    vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
> ** RX Conf **
>    pthresh        :   8, hthresh          :   8, wthresh        :     0
>    Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
> ** TX Conf **
>    pthresh        :  32, hthresh          :   0, wthresh        :     0
>    Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0,
> TXQ Flags:00000f01
>
>     Create: Default RX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>       Set RX queue stats mapping pid 0, q 0, lcore 1
>
>
>     Create: Default TX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Range TX    0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Sequence TX 0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 2176 + Hdr
> 128)) + 192 =    145 KB headroom 128 2176
>
>
> Port memory used =  73873 KB
> Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7a
>
> ** Default Info (0000:03:00.1, if_index:0) **
>    max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
>    max_rx_queues  : 128, max_tx_queues     :  64
>    max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
>    rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512,
> flow_type_rss_offloads:0000000000038d34
>    vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
> ** RX Conf **
>    pthresh        :   8, hthresh          :   8, wthresh        :     0
>    Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
> ** TX Conf **
>    pthresh        :  32, hthresh          :   0, wthresh        :     0
>    Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0,
> TXQ Flags:00000f01
>
>     Create: Default RX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>       Set RX queue stats mapping pid 1, q 0, lcore 2
>
>
>     Create: Default TX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Range TX    1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Sequence TX 1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
> 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Special TX  1:0  - Memory used (MBUFs   64 x (size 2176 + Hdr
> 128)) + 192 =    145 KB headroom 128 2176
>
>
> Port memory used =  73873 KB
> Initialize Port 1 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7b
>
> Total memory used = 147746 KB
> Port  0: Link Down <Enable promiscuous mode>
> *[PT]: I don't know why Port 0 went down at this point. Is there anything
> else I should check?*
>
*[PT 2]: After updating the NIC firmware, the link drop issue has been
fixed.*

>
>
> Port  1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
>
> pktgen_packet_capture_init: mz->len 4194304
>
> === Display processing on lcore 0
>   RX processing lcore:   1 rx:  1 tx:  0
> For RX found 1 port(s) for lcore 1
>   RX processing lcore:   2 rx:  1 tx:  0
> For RX found 1 port(s) for lcore 2
>   TX processing lcore:   3 rx:  0 tx:  1
> For TX found 1 port(s) for lcore 3
>   TX processing lcore:   4 rx:  0 tx:  1
> For TX found 1 port(s) for lcore 4
>
>
>
>
>
> Thanks for any feedback.
>
>
> --
> P.T
>



-- 
P.T


* Re: [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
  2017-07-25  9:04     ` Paul Tsvika
@ 2017-07-25 14:10       ` Wiles, Keith
  2017-07-26  9:02         ` Paul Tsvika
  0 siblings, 1 reply; 8+ messages in thread
From: Wiles, Keith @ 2017-07-25 14:10 UTC (permalink / raw)
  To: Paul Tsvika; +Cc: users


> On Jul 25, 2017, at 4:04 AM, Paul Tsvika <mozloverinweb@gmail.com> wrote:
> 
> Hi
> 
> I've some updates and added inline text in my post.
> 
> 
> Thanks.
> 
> 
>> 
>> Memory: 32G ( 16 G on two channels )
>> CPU: Xeon-D 1557
>> NIC: Intel X552/X557 10GBASE-T
>> 
>> CPU layout: ( Graph generated from cpu_layout.py )
>> 
>> Core and Socket Information (as reported by '/proc/cpuinfo')
>> ============================================================
>> cores =  [0, 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13]
>> sockets =  [0]
>>
>>         Socket 0
>>         --------
>> Core 0  [0, 12]
>> Core 1  [1, 13]
>> Core 2  [2, 14]
>> Core 3  [3, 15]
>> Core 4  [4, 16]
>> Core 5  [5, 17]
>> Core 8  [6, 18]
>> Core 9  [7, 19]
>> Core 10 [8, 20]
>> Core 11 [9, 21]
>> Core 12 [10, 22]
>> Core 13 [11, 23]
>> =============================================================
>> 
>> My motherboard has two 10G ports, so I set up the test in loopback.
>> 
>> I have compiled DPDK successfully; below are the steps I used to configure
>> pktgen:
>> 
>> $ sudo modprobe uio
>> $ sudo insmod ./build/kmod/igb_uio.ko
>> $ sudo ./usertools/dpdk-devbind.py -b igb_uio xxx:xx.0 xxx:xx.1
>> $ sudo ./app/pktgen -c 0x1f -n 2 -- -P -m "[1:3].0, [2:4].1"
>> <pktgen> start 0
>> 
>> Cores 1 and 3 handle port 0 RX and TX respectively; cores 2 and 4 handle
>> port 1 RX and TX respectively.
>> 
>> I have some questions listed below:
>> 
>> 1.
>> 
>> Other developers who run pktgen successfully report that the TX and RX
>> rates should be almost the same, hitting 10000 or 99xx.
>> I have not yet figured out why the RX rate drops when I run pktgen.
>>
>>  Flags:Port      :   P--------------:0   P--------------:1            10318718/0
>> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
>> Pkts/s Max/Rx     :                 0/0    10318848/8096382      10318848/8096382
>>        Max/Tx     :   14880683/11528400                 0/0     14880683/11528400
>> MBits/s Rx/Tx     :              0/7747              5440/0             5440/7747
>>
>> 
> *[PT 1]: Has any developer experienced this?*


The only other way you can get a performance difference like this is if the NICs are in PCIe slots that cannot handle the bandwidth. Did you check whether both slots are x8 PCIe slots?
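
The negotiated link width can be read from PCI config space; something
along these lines should work ( adjust the PCI address to your NIC ):

$ sudo lspci -s 03:00.0 -vv | grep -i 'LnkCap\|LnkSta'

For two 10G ports on one device you would want LnkSta to report a width
around x8.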

> 
>> Broadcast         :                   0                   0
>> Multicast         :                   0                   0
>>   64 Bytes        :                   0          4603033247
>>   65-127          :                   0                   0
>>   128-255         :                   0                   0
>>   256-511         :                   0                   0
>>   512-1023        :                   0                   0
>>   1024-1518       :                   0                   0
>> Runts/Jumbos      :                 0/0                 0/0
>> Errors Rx/Tx      :                 0/0                 0/0
>> Total Rx Pkts     :                   0          4596736394
>>       Tx Pkts     :          6588970508                   0
>>       Rx MBs      :                   0             3089006
>>       Tx MBs      :             4427788                   0
>> ARP/ICMP Pkts     :                 0/0                 0/0
>> Tx Count/% Rate   :       Forever /100%       Forever /100%
>> Pattern Type      :             abcd...             abcd...
>> Tx Count/% Rate   :       Forever /100%       Forever /100%
>> PktSize/Tx Burst  :           64 /   32           64 /   32 -------------------
>> Src/Dest Port     :         1234 / 5678         1234 / 5678
>> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
>> Dst  IP Address   :         192.168.1.1         192.168.0.1
>> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24 -------------------
>> Dst MAC Address   :   0c:c4:7a:c5:43:7b   0c:c4:7a:c5:43:7a
>> -- Pktgen Ver: 3.3:   8086:15ad/03:00.0   8086:15ad/03:00.1 -------------------
>> 
>> 
>> 2.1
>> 
>> My start_log below:
>> 
>> Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
>>   Pktgen created by: Keith Wiles -- >>> Powered by Intel® DPDK <<<
>> 
>>>>> Packet Burst 32, RX Desc 512, TX Desc 1024, mbufs/port 8192, mbuf
>> cache 1024
>> 
>> *[PT]: It looks like the packet size is 32; shouldn't it be 64? Can I
>> configure it?*
>> *Also, why are the RX Desc and TX Desc counts not the same?*
>> 
>> === port to lcore mapping table (# lcores 5) ===
>>   lcore:    0       1       2       3       4      Total
>> port   0: ( D: T) ( 1: 0) ( 0: 0) ( 0: 1) ( 0: 0) = ( 1: 1)
>> port   1: ( D: T) ( 0: 0) ( 1: 0) ( 0: 0) ( 0: 1) = ( 1: 1)
>> Total   : ( 0: 0) ( 1: 0) ( 1: 0) ( 0: 1) ( 0: 1)
>>  Display and Timer on lcore 0, rx:tx counts per port/lcore
>> 
>> Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 1024
>> Lcore:
>>    1, RX-Only
>>                RX_cnt( 1): (pid= 0:qid= 0)
>>    2, RX-Only
>>                RX_cnt( 1): (pid= 1:qid= 0)
>>    3, TX-Only
>>                TX_cnt( 1): (pid= 0:qid= 0)
>>    4, TX-Only
>>                TX_cnt( 1): (pid= 1:qid= 0)
>> 
>> Port :
>>    0, nb_lcores  2, private 0x986ec0, lcores:  1  3
>>    1, nb_lcores  2, private 0x989220, lcores:  2  4
>> 
>> 
>> 
>> ** Default Info (0000:03:00.0, if_index:0) **
>>   max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
>>   max_rx_queues  : 128, max_tx_queues     :  64
>>   max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
>>   rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512,
>> flow_type_rss_offloads:0000000000038d34
>>   vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
>> ** RX Conf **
>>   pthresh        :   8, hthresh          :   8, wthresh        :     0
>>   Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
>> ** TX Conf **
>>   pthresh        :  32, hthresh          :   0, wthresh        :     0
>>   Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0,
>> TXQ Flags:00000f01
>> 
>>    Create: Default RX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>      Set RX queue stats mapping pid 0, q 0, lcore 1
>> 
>> 
>>    Create: Default TX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>    Create: Range TX    0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>    Create: Sequence TX 0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>    Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 2176 + Hdr
>> 128)) + 192 =    145 KB headroom 128 2176
>> 
>> 
>> Port memory used =  73873 KB
>> Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7a
>> 
>> ** Default Info (0000:03:00.1, if_index:0) **
>>   max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
>>   max_rx_queues  : 128, max_tx_queues     :  64
>>   max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
>>   rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512,
>> flow_type_rss_offloads:0000000000038d34
>>   vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
>> ** RX Conf **
>>   pthresh        :   8, hthresh          :   8, wthresh        :     0
>>   Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
>> ** TX Conf **
>>   pthresh        :  32, hthresh          :   0, wthresh        :     0
>>   Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0,
>> TXQ Flags:00000f01
>> 
>>    Create: Default RX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>      Set RX queue stats mapping pid 1, q 0, lcore 2
>> 
>> 
>>    Create: Default TX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>    Create: Range TX    1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>    Create: Sequence TX 1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr
>> 128)) + 192 =  18433 KB headroom 128 2176
>>    Create: Special TX  1:0  - Memory used (MBUFs   64 x (size 2176 + Hdr
>> 128)) + 192 =    145 KB headroom 128 2176
>> 
>> 
>> Port memory used =  73873 KB
>> Initialize Port 1 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7b
>> 
>> Total memory used = 147746 KB
>> Port  0: Link Down <Enable promiscuous mode>
>> *[PT]: I don't know why Port 0 went down at this point. Is there anything
>> else I should check?*
>> 
> *[PT 2]: After updating the NIC firmware, the link drop issue has been
> fixed.*
> 
>> 
>> 
>> Port  1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
>> 
>> pktgen_packet_capture_init: mz->len 4194304
>> 
>> === Display processing on lcore 0
>>  RX processing lcore:   1 rx:  1 tx:  0
>> For RX found 1 port(s) for lcore 1
>>  RX processing lcore:   2 rx:  1 tx:  0
>> For RX found 1 port(s) for lcore 2
>>  TX processing lcore:   3 rx:  0 tx:  1
>> For TX found 1 port(s) for lcore 3
>>  TX processing lcore:   4 rx:  0 tx:  1
>> For TX found 1 port(s) for lcore 4
>> 
>> 
>> 
>> 
>> 
>> Thanks for any feedback.
>> 
>> 
>> --
>> P.T
>> 
> 
> 
> 
> -- 
> P.T

Regards,
Keith



* Re: [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
  2017-07-25 14:10       ` Wiles, Keith
@ 2017-07-26  9:02         ` Paul Tsvika
  2017-07-26 14:55           ` Wiles, Keith
  0 siblings, 1 reply; 8+ messages in thread
From: Paul Tsvika @ 2017-07-26  9:02 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Hi Keith.

This is a loopback test, and I think that somehow affects the performance,
since there is only one NIC and PHY on the motherboard.

I tried two motherboards, and the TX/RX rates ( 9611/9497 ) look more
reasonable.

I can also set the packet size on the pktgen command line.
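
( If I remember the pktgen CLI syntax correctly, it is something like:

<pktgen> set 0 size 64

to get 64-byte frames on port 0. )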

However, I still don't understand why RX Desc 512 and TX Desc 1024 are
not the same.

Are the values supposed to be the same?



Regards-


P.T


* Re: [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
  2017-07-26  9:02         ` Paul Tsvika
@ 2017-07-26 14:55           ` Wiles, Keith
  0 siblings, 0 replies; 8+ messages in thread
From: Wiles, Keith @ 2017-07-26 14:55 UTC (permalink / raw)
  To: Paul Tsvika; +Cc: users


> On Jul 26, 2017, at 4:02 AM, Paul Tsvika <mozloverinweb@gmail.com> wrote:
> 
> Hi Keith.
> 
> This is a loopback test, and I think that somehow affects the performance, since there is only one NIC and PHY on the motherboard.
> 
> I tried two motherboards, and the TX/RX rates ( 9611/9497 ) look more reasonable.
> 
> I can also set the packet size on the pktgen command line. 

If Pktgen performance is OK on other motherboards, then the problem with this machine is most likely not a Pktgen problem. You did not answer my question about the PCIe slots in my previous email.

> 
> However, I still don't understand why RX Desc 512 and TX Desc 1024 are not the same.
> 
> Are the values supposed to be the same?

The increased TX ring size is to reduce the number of times a TX done cleanup occurs.

> 
> 
> 
> Regards-
> 
> 
> P.T
> 
>  

Regards,
Keith


* Re: [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case
  2017-07-22  1:49   ` Paul Tsvika
  2017-07-25  9:04     ` Paul Tsvika
@ 2017-07-26 16:44     ` Wiles, Keith
  1 sibling, 0 replies; 8+ messages in thread
From: Wiles, Keith @ 2017-07-26 16:44 UTC (permalink / raw)
  To: Paul Tsvika; +Cc: users

Send only plain text to the list please.

> On Jul 21, 2017, at 8:49 PM, Paul Tsvika <mozloverinweb@gmail.com> wrote:
> 
> Thanks Keith.
> 
> Let me post my questions again and inline the text.
> 
> ---------------------------------------------------------------------------------------------------------------
> 
> Hi developers,
> 
> Below is my hardware configuration:
> 
> Memory: 32G ( 16 G on two channels )
> CPU: Xeon-D 1557 
> NIC: Intel X552/X557 10GBASE-T
> 
> CPU layout: ( Graph generated from cpu_layout.py )
> 
> Core and Socket Information (as reported by '/proc/cpuinfo')
> ============================================================
> 
> cores =  [0, 1, 2, 3, 4, 5, 8, 9, 10, 11, 12, 13]
> sockets =  [0]
> 
>         Socket 0       
>         --------       
> Core 0  [0, 12]        
> Core 1  [1, 13]        
> Core 2  [2, 14]        
> Core 3  [3, 15]        
> Core 4  [4, 16]        
> Core 5  [5, 17]        
> Core 8  [6, 18]        
> Core 9  [7, 19]        
> Core 10 [8, 20]        
> Core 11 [9, 21]        
> Core 12 [10, 22]       
> Core 13 [11, 23]
> =============================================================
> 
> My motherboard has two 10G ports, so I set up the test in loopback. 
> 
> I have compiled DPDK successfully; below are the steps I used to configure pktgen:
> 
> $ sudo modprobe uio
> $ sudo insmod ./build/kmod/igb_uio.ko
> $ sudo ./usertools/dpdk-devbind.py -b igb_uio xxx:xx.0 xxx:xx.1
> $ sudo ./app/pktgen -c 0x1f -n 2 -- -P -m "[1:3].0, [2:4].1" 
> <pktgen> start 0
> 
> Cores 1 and 3 handle port 0 RX and TX respectively; cores 2 and 4 handle port 1 RX and TX respectively. 
> 
> I have some questions listed below:
> 
> 1.
> 
> Other developers who run pktgen successfully report that the TX and RX rates should be almost the same, hitting 10000 or 99xx.
> I have not yet figured out why the RX rate drops when I run pktgen. 
> 
>  Flags:Port      :   P--------------:0   P--------------:1            10318718/0
> Link State        :       <UP-10000-FD>       <UP-10000-FD>     ----TotalRate----
> Pkts/s Max/Rx     :                 0/0    10318848/8096382      10318848/8096382
>        Max/Tx     :   14880683/11528400                 0/0     14880683/11528400
> MBits/s Rx/Tx     :              0/7747              5440/0             5440/7747
> Broadcast         :                   0                   0
> Multicast         :                   0                   0
>   64 Bytes        :                   0          4603033247
>   65-127          :                   0                   0
>   128-255         :                   0                   0
>   256-511         :                   0                   0
>   512-1023        :                   0                   0
>   1024-1518       :                   0                   0
> Runts/Jumbos      :                 0/0                 0/0
> Errors Rx/Tx      :                 0/0                 0/0
> Total Rx Pkts     :                   0          4596736394
>       Tx Pkts     :          6588970508                   0
>       Rx MBs      :                   0             3089006
>       Tx MBs      :             4427788                   0
> ARP/ICMP Pkts     :                 0/0                 0/0
> Tx Count/% Rate   :       Forever /100%       Forever /100%
> Pattern Type      :             abcd...             abcd...
> Tx Count/% Rate   :       Forever /100%       Forever /100%
> PktSize/Tx Burst  :           64 /   32           64 /   32 -------------------
> Src/Dest Port     :         1234 / 5678         1234 / 5678
> Pkt Type:VLAN ID  :     IPv4 / TCP:0001     IPv4 / TCP:0001
> Dst  IP Address   :         192.168.1.1         192.168.0.1
> Src  IP Address   :      192.168.0.1/24      192.168.1.1/24 -------------------
> Dst MAC Address   :   0c:c4:7a:c5:43:7b   0c:c4:7a:c5:43:7a
> -- Pktgen Ver: 3.3:   8086:15ad/03:00.0   8086:15ad/03:00.1 -------------------
> 
> 
> 2.1
> 
> My start_log below:
> 
> Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
>    Pktgen created by: Keith Wiles -- >>> Powered by Intel® DPDK <<<
> 
> >>> Packet Burst 32, RX Desc 512, TX Desc 1024, mbufs/port 8192, mbuf cache 1024 
> [PT]: It looks like the packet size is 32; shouldn't it be 64? Can I configure it?


32 is the number of packets in a TX burst, not the packet size.
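
Both the packet size and the burst size can be changed from the pktgen
command line; if I recall the syntax correctly, something like:

set 0 size 64
set 0 burst 32

See the pktgen help screen for the exact form.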

> Also, why are RX Desc and TX Desc not the same?
> 
> === port to lcore mapping table (# lcores 5) ===
>    lcore:    0       1       2       3       4      Total
> port   0: ( D: T) ( 1: 0) ( 0: 0) ( 0: 1) ( 0: 0) = ( 1: 1)
> port   1: ( D: T) ( 0: 0) ( 1: 0) ( 0: 0) ( 0: 1) = ( 1: 1)
> Total   : ( 0: 0) ( 1: 0) ( 1: 0) ( 0: 1) ( 0: 1)
>   Display and Timer on lcore 0, rx:tx counts per port/lcore
> 
> Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 1024
> Lcore:
>     1, RX-Only
>                 RX_cnt( 1): (pid= 0:qid= 0) 
>     2, RX-Only
>                 RX_cnt( 1): (pid= 1:qid= 0) 
>     3, TX-Only
>                 TX_cnt( 1): (pid= 0:qid= 0) 
>     4, TX-Only
>                 TX_cnt( 1): (pid= 1:qid= 0) 
> 
> Port :
>     0, nb_lcores  2, private 0x986ec0, lcores:  1  3 
>     1, nb_lcores  2, private 0x989220, lcores:  2  4 
> 
> 
> 
> ** Default Info (0000:03:00.0, if_index:0) **
>    max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
>    max_rx_queues  : 128, max_tx_queues     :  64
>    max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
>    rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512, flow_type_rss_offloads:0000000000038d34
>    vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
> ** RX Conf **
>    pthresh        :   8, hthresh          :   8, wthresh        :     0
>    Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
> ** TX Conf **
>    pthresh        :  32, hthresh          :   0, wthresh        :     0
>    Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0, TXQ Flags:00000f01
> 
>     Create: Default RX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>       Set RX queue stats mapping pid 0, q 0, lcore 1
> 
> 
>     Create: Default TX  0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Range TX    0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Sequence TX 0:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Special TX  0:0  - Memory used (MBUFs   64 x (size 2176 + Hdr 128)) + 192 =    145 KB headroom 128 2176
> 
>                                                                        Port memory used =  73873 KB
> Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7a
> 
> ** Default Info (0000:03:00.1, if_index:0) **
>    max_vfs        :   0, min_rx_bufsize    :1024, max_rx_pktlen : 15872
>    max_rx_queues  : 128, max_tx_queues     :  64
>    max_mac_addrs  : 128, max_hash_mac_addrs:4096, max_vmdq_pools:    64
>    rx_offload_capa:  79, tx_offload_capa   : 191, reta_size     :   512, flow_type_rss_offloads:0000000000038d34
>    vmdq_queue_base:   0, vmdq_queue_num    : 128, vmdq_pool_base:     0
> ** RX Conf **
>    pthresh        :   8, hthresh          :   8, wthresh        :     0
>    Free Thresh    :  32, Drop Enable      :   0, Deferred Start :     0
> ** TX Conf **
>    pthresh        :  32, hthresh          :   0, wthresh        :     0
>    Free Thresh    :  32, RS Thresh        :  32, Deferred Start :     0, TXQ Flags:00000f01
> 
>     Create: Default RX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>       Set RX queue stats mapping pid 1, q 0, lcore 2
> 
> 
>     Create: Default TX  1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Range TX    1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Sequence TX 1:0  - Memory used (MBUFs 8192 x (size 2176 + Hdr 128)) + 192 =  18433 KB headroom 128 2176
>     Create: Special TX  1:0  - Memory used (MBUFs   64 x (size 2176 + Hdr 128)) + 192 =    145 KB headroom 128 2176
> 
>                                                                        Port memory used =  73873 KB
> Initialize Port 1 -- TxQ 1, RxQ 1,  Src MAC 0c:c4:7a:c5:43:7b
>                                                                       Total memory used = 147746 KB
> Port  0: Link Down <Enable promiscuous mode>
> [PT]: I don't know why Port 0 went down at this point. Is there anything else I should check?
> 
> Port  1: Link Up - speed 10000 Mbps - full-duplex <Enable promiscuous mode>
> 
> pktgen_packet_capture_init: mz->len 4194304
> 
> === Display processing on lcore 0
>   RX processing lcore:   1 rx:  1 tx:  0
> For RX found 1 port(s) for lcore 1
>   RX processing lcore:   2 rx:  1 tx:  0
> For RX found 1 port(s) for lcore 2
>   TX processing lcore:   3 rx:  0 tx:  1
> For TX found 1 port(s) for lcore 3
>   TX processing lcore:   4 rx:  0 tx:  1
> For TX found 1 port(s) for lcore 4
> 
> 
> 
> 
> 
> Thanks for any feedback.
> 
> 
> -- 
> P.T

Regards,
Keith



end of thread

Thread overview: 8+ messages
2017-07-21 15:11 [dpdk-users] [pktgen] pktgen Tx/Rx rates are not the same in loopback testing case Paul Tsvika
2017-07-21 15:40 ` Wiles, Keith
2017-07-22  1:49   ` Paul Tsvika
2017-07-25  9:04     ` Paul Tsvika
2017-07-25 14:10       ` Wiles, Keith
2017-07-26  9:02         ` Paul Tsvika
2017-07-26 14:55           ` Wiles, Keith
2017-07-26 16:44     ` Wiles, Keith
