DPDK usage discussions
* [dpdk-users] problem using two ports with pktgen-dpdk
From: Stefano Salsano @ 2018-06-28  9:45 UTC (permalink / raw)
  To: users; +Cc: Wiles, Keith

Hi,

I'm now simply trying to use two ports in a single pktgen-dpdk instance
(maybe I do not need the two instances...), but I still have a problem.

Pktgen Ver: 3.5.1 (DPDK 18.08.0-rc0)
Powered by DPDK ** Version: DPDK 18.08.0-rc0

When I try to use two ports as follows, it stops with an error (see below):

sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-4 -n 3 -- -P -m "1.0, 2.1"

while using a single port at a time works fine:

sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen  -l 0-4 -n 3  -- -P -m "1.0"

sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen  -l 0-4 -n 3  -- -P -m "2.1"
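
For reference, the -m option maps lcores to ports; as I read the
pktgen usage text, the mapping syntax works roughly like this:

-m "1.0, 2.1"   (lcore 1 does RX/TX for port 0, lcore 2 for port 1)
-m "[1:2].0"    (lcore 1 does RX and lcore 2 does TX, both for port 0)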

Any suggestion would be appreciated!

ciao Stefano

This is the error; the full output is further below:

!PANIC!: Cannot create mbuf pool (Range TX    1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
PANIC in pktgen_mbuf_pool_create():
Cannot create mbuf pool (Range TX    1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x29) [0x466139]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ffff6cae830]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0x598) [0x460388]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x18ad) [0x491aed]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc3) [0x45769b]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2b) [0x50aadb]]
Aborted

FULL OUTPUT

ssalsano@node-0:~/pktgen-dpdk$ sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-4 -n 3 --socket-mem 64,64 -- -P -m "1.0, 2.1"

Copyright (c) <2010-2017>, Intel Corporation. All rights reserved. Powered by DPDK
EAL: Detected 32 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL:   probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL:   probe driver: 8086:10fb net_ixgbe
Lua 5.3.4  Copyright (C) 1994-2017 Lua.org, PUC-Rio
    Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
    Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<

 >>> Packet Burst 64, RX Desc 1024, TX Desc 2048, mbufs/port 16384, mbuf cache 2048

=== port to lcore mapping table (# lcores 5) ===
    lcore:    0       1       2       3       4      Total
port   0: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) ( 0: 0) = ( 1: 1)
port   1: ( D: T) ( 0: 0) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
Total   : ( 0: 0) ( 1: 1) ( 1: 1) ( 0: 0) ( 0: 0)
   Display and Timer on lcore 0, rx:tx counts per port/lcore

Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 2048
Lcore:
     1, RX-TX
                 RX_cnt( 1): (pid= 0:qid= 0)
                 TX_cnt( 1): (pid= 0:qid= 0)
     2, RX-TX
                 RX_cnt( 1): (pid= 1:qid= 0)
                 TX_cnt( 1): (pid= 1:qid= 0)

Port :
     0, nb_lcores  1, private 0xe17b80, lcores:  1
     1, nb_lcores  1, private 0xe19b90, lcores:  2


** Device Info (0000:06:00.0, if_index:0, flags 00000002) **
    min_rx_bufsize : 1024  max_rx_pktlen     :15872  hash_key_size :   40
    max_rx_queues  :  128  max_tx_queues     :   64  max_vfs       :    0
    max_mac_addrs  :  127  max_hash_mac_addrs: 4096  max_vmdq_pools:   64
    vmdq_queue_base:    0  vmdq_queue_num    :  128  vmdq_pool_base:    0
    nb_rx_queues   :    0  nb_tx_queues      :    0  speed_capa    : 00000120

    flow_type_rss_offloads:0000000000038d34  reta_size             :  128
    rx_offload_capa       :000000000000be9f  tx_offload_capa       :0000000000172095
    rx_queue_offload_capa :0000000000000001  tx_queue_offload_capa :0000000000000000
    dev_capa              :0000000000000000

   RX Conf:
      pthresh        :    8 hthresh          :    8 wthresh        :    0
      Free Thresh    :   32 Drop Enable      :    0 Deferred Start :    0
      offloads       :0000000000000000
   TX Conf:
      pthresh        :   32 hthresh          :    0 wthresh        :    0
      Free Thresh    :   32 RS Thresh        :   32 Deferred Start :    0  TXQ Flags: 00000f01
      offloads       :0000000000000000
   Rx: descriptor Limits
      nb_max         : 4096  nb_min          :   32  nb_align      :    8
      nb_seg_max     :    0  nb_mtu_seg_max  :    0
   Tx: descriptor Limits
      nb_max         : 4096  nb_min          :   32  nb_align      :    8
      nb_seg_max     :   40  nb_mtu_seg_max  :   40
   Rx: Port Config
      burst_size     :    0  ring_size       :    0  nb_queues     :    0
   Tx: Port Config
      burst_size     :    0  ring_size       :    0  nb_queues     :    0
   Switch Info: (null)
      domain_id      :    0  port_id         :    0

     Create: Default RX  0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB, headroom 128
       Set RX queue stats mapping pid 0, q 0, lcore 1


     Create: Default TX  0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB, headroom 128
     Create: Range TX    0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB, headroom 128
     Create: Sequence TX 0:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB, headroom 128
     Create: Special TX  0:0  - Memory used (MBUFs    64 x (size 2176 + Hdr 128)) + 192 =    145 KB, headroom 128

Port memory used = 147605 KB
Initialize Port 0 -- TxQ 1, RxQ 1,  Src MAC 90:e2:ba:29:f6:40
** Device Info (0000:06:00.1, if_index:0, flags 00000002) **
    min_rx_bufsize : 1024  max_rx_pktlen     :15872  hash_key_size :   40
    max_rx_queues  :  128  max_tx_queues     :   64  max_vfs       :    0
    max_mac_addrs  :  127  max_hash_mac_addrs: 4096  max_vmdq_pools:   64
    vmdq_queue_base:    0  vmdq_queue_num    :  128  vmdq_pool_base:    0
    nb_rx_queues   :    0  nb_tx_queues      :    0  speed_capa    : 00000120

    flow_type_rss_offloads:0000000000038d34  reta_size             :  128
    rx_offload_capa       :000000000000be9f  tx_offload_capa       :0000000000172095
    rx_queue_offload_capa :0000000000000001  tx_queue_offload_capa :0000000000000000
    dev_capa              :0000000000000000

   RX Conf:
      pthresh        :    8 hthresh          :    8 wthresh        :    0
      Free Thresh    :   32 Drop Enable      :    0 Deferred Start :    0
      offloads       :0000000000000000
   TX Conf:
      pthresh        :   32 hthresh          :    0 wthresh        :    0
      Free Thresh    :   32 RS Thresh        :   32 Deferred Start :    0  TXQ Flags: 00000f01
      offloads       :0000000000000000
   Rx: descriptor Limits
      nb_max         : 4096  nb_min          :   32  nb_align      :    8
      nb_seg_max     :    0  nb_mtu_seg_max  :    0
   Tx: descriptor Limits
      nb_max         : 4096  nb_min          :   32  nb_align      :    8
      nb_seg_max     :   40  nb_mtu_seg_max  :   40
   Rx: Port Config
      burst_size     :    0  ring_size       :    0  nb_queues     :    0
   Tx: Port Config
      burst_size     :    0  ring_size       :    0  nb_queues     :    0
   Switch Info: (null)
      domain_id      :    0  port_id         :    0

     Create: Default RX  1:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB, headroom 128
       Set RX queue stats mapping pid 1, q 0, lcore 2


     Create: Default TX  1:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB, headroom 128
     Create: Range TX    1:0  - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 =  36865 KB, headroom 128
!PANIC!: Cannot create mbuf pool (Range TX    1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
PANIC in pktgen_mbuf_pool_create():
Cannot create mbuf pool (Range TX    1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x29) [0x466139]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ffff6cae830]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0x598) [0x460388]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x18ad) [0x491aed]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc3) [0x45769b]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2b) [0x50aadb]]
Aborted


-- 
*******************************************************************
Stefano Salsano
Professore Associato
Dipartimento Ingegneria Elettronica
Universita' di Roma Tor Vergata
Viale Politecnico, 1 - 00133 Roma - ITALY

http://netgroup.uniroma2.it/Stefano_Salsano/

E-mail  : stefano.salsano@uniroma2.it
Cell.   : +39 320 4307310
Office  : (Tel.) +39 06 72597770 (Fax.) +39 06 72597435
*******************************************************************


* Re: [dpdk-users] problem using two ports with pktgen-dpdk
From: Wiles, Keith @ 2018-06-28 13:02 UTC (permalink / raw)
  To: Stefano Salsano; +Cc: users



> On Jun 28, 2018, at 4:45 AM, Stefano Salsano <stefano.salsano@uniroma2.it> wrote:
> 
> Hi,
> 
> I'm now simply trying to use two ports in a single pktgen-dpdk instance
> (maybe I do not need the two instances...), but I still have a problem.
> 
> Pktgen Ver: 3.5.1 (DPDK 18.08.0-rc0)
> Powered by DPDK ** Version: DPDK 18.08.0-rc0


OK, now I see the pktgen and dpdk versions.

> 
> When I try to use two ports as follows, it stops with an error (see below):
> 
> sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen  -l 0-4 -n 3 -- -P -m "1.0, 2.1"
> 
> while using a single port at a time works fine:
> 
> sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen  -l 0-4 -n 3  -- -P -m "1.0"
> 
> sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen  -l 0-4 -n 3  -- -P -m "2.1"
> 
> Any suggestion would be appreciated!

Another place to look is the pktgen/cfg directory, at the files master.cfg and slave.cfg (bad names, as they are not master and slave processes). These files contain Python data statements and are used by the pktgen/tools/dpdk-run.py command like so:

$ ./tools/dpdk-run.py master

$ ./tools/dpdk-run.py slave

You can copy these files and edit them for your use; call them, say, pktgen-1.cfg and pktgen-2.cfg, then run:

$ ./tools/dpdk-run.py pktgen-1

$ ./tools/dpdk-run.py pktgen-2
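
To give a feel for the format, here is a minimal sketch of such a cfg
file -- the key names below are only illustrative, so copy master.cfg
or slave.cfg for the authoritative set of fields:

description = 'Pktgen configuration for the first instance'

# Python data statements read by tools/dpdk-run.py (illustrative values)
run = {
    'exec': ('sudo', '-E'),    # launch wrapper
    'app_name': 'pktgen',      # binary to run
    'cores': '0-4',            # EAL -l argument
    'prefix': 'pktgen-1',      # EAL --file-prefix, keeps instances separate
    'opts': ['-P'],            # pktgen options
    'map': ['1.0'],            # lcore-to-port mapping (-m)
}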

I hope that helps.

> [...]

Regards,
Keith


* Re: [dpdk-users] problem using two ports with pktgen-dpdk (solved)
From: Stefano Salsano @ 2018-06-28 21:09 UTC (permalink / raw)
  To: Wiles, Keith; +Cc: users

Hi Keith,

thanks for your support, we have found the root of the problem:
we simply had to increase vm.nr_hugepages in /etc/sysctl.conf
from 256 to 2048.

This way, we were able to use two ports in the same pktgen instance
as well as to run two separate primary instances of pktgen.
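
For anyone hitting the same panic, the arithmetic (as far as we can
tell) is: each port's mbuf pools total about 147605 KB (~144 MB) per
the startup log, so two ports on socket 0 need roughly 288 MB, while
our old setting of 256 hugepages x 2 MB gave 512 MB split across the
two NUMA nodes, i.e. only ~256 MB on socket 0. A minimal sketch of the
change we made (assuming the default 2 MB hugepage size):

# /etc/sysctl.conf
vm.nr_hugepages = 2048

# apply without rebooting
sudo sysctl -p

# or set it at runtime, then verify
echo 2048 | sudo tee /proc/sys/vm/nr_hugepages
grep Huge /proc/meminfo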

ciao
Stefano


On 2018-06-28 15:02, Wiles, Keith wrote:
> [...]


-- 
*******************************************************************
Stefano Salsano
Professore Associato
Dipartimento Ingegneria Elettronica
Universita' di Roma Tor Vergata
Viale Politecnico, 1 - 00133 Roma - ITALY

http://netgroup.uniroma2.it/Stefano_Salsano/

E-mail  : stefano.salsano@uniroma2.it
Cell.   : +39 320 4307310
Office  : (Tel.) +39 06 72597770 (Fax.) +39 06 72597435
*******************************************************************

