From: Stefano Salsano <stefano.salsano@uniroma2.it>
To: users <users@dpdk.org>
Cc: "Wiles, Keith" <keith.wiles@intel.com>
Subject: [dpdk-users] problem using two ports with pktgen-dpdk
Date: Thu, 28 Jun 2018 11:45:53 +0200
Message-ID: <bdea7182-634c-4983-7df8-daf3ed7363a0@uniroma2.it>
Hi,
I'm now simply trying to use two ports in a single pktgen-dpdk instance
(maybe I do not need two instances...), but I still have a problem.
Pktgen Ver: 3.5.1 (DPDK 18.08.0-rc0)
Powered by DPDK ** Version: DPDK 18.08.0-rc0
When I try to use two ports as follows, it stops with an error (see below):
sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-4 -n 3 -- -P -m "1.0, 2.1"
while using a single port at a time is OK:
sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-4 -n 3 -- -P -m "1.0"
sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-4 -n 3 -- -P -m "2.1"
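For reference, my understanding of the -m mapping in these commands (from the
pktgen docs, so please correct me if I am reading it wrong) is:

  # -m "1.0, 2.1"  ->  lcore 1 does RX/TX for port 0, lcore 2 does RX/TX for port 1
  # lcore 0 is left for pktgen's display/timer thread
  sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-4 -n 3 -- -P -m "1.0, 2.1"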
Any suggestion would be appreciated!
ciao Stefano
This is the error; the full output is further below:
!PANIC!: Cannot create mbuf pool (Range TX 1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
PANIC in pktgen_mbuf_pool_create():
Cannot create mbuf pool (Range TX 1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x29) [0x466139]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ffff6cae830]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0x598) [0x460388]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x18ad) [0x491aed]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc3) [0x45769b]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2b) [0x50aadb]]
Aborted
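In case it is relevant: since "No such file or directory" comes out of the mbuf
pool creation, my guess is that this is really a hugepage/memory shortage on
socket 0 rather than a missing file. This is what I would check (just a sketch
of the commands, assuming 2 MB hugepages):

  # overall hugepage status
  grep -i huge /proc/meminfo
  # per-NUMA-node totals and free pages
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
  cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages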
FULL OUTPUT
ssalsano@node-0:~/pktgen-dpdk$ sudo -E ./app/x86_64-native-linuxapp-gcc/pktgen -l 0-4 -n 3 --socket-mem 64,64 -- -P -m "1.0, 2.1"
Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
Powered by DPDK
EAL: Detected 32 lcore(s)
EAL: Detected 2 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: No free hugepages reported in hugepages-1048576kB
EAL: Probing VFIO support...
EAL: PCI device 0000:01:00.0 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:01:00.1 on NUMA socket 0
EAL: probe driver: 8086:1521 net_e1000_igb
EAL: PCI device 0000:06:00.0 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
EAL: PCI device 0000:06:00.1 on NUMA socket 0
EAL: probe driver: 8086:10fb net_ixgbe
Lua 5.3.4 Copyright (C) 1994-2017 Lua.org, PUC-Rio
Copyright (c) <2010-2017>, Intel Corporation. All rights reserved.
Pktgen created by: Keith Wiles -- >>> Powered by DPDK <<<
>>> Packet Burst 64, RX Desc 1024, TX Desc 2048, mbufs/port 16384, mbuf cache 2048
=== port to lcore mapping table (# lcores 5) ===
lcore: 0 1 2 3 4 Total
port 0: ( D: T) ( 1: 1) ( 0: 0) ( 0: 0) ( 0: 0) = ( 1: 1)
port 1: ( D: T) ( 0: 0) ( 1: 1) ( 0: 0) ( 0: 0) = ( 1: 1)
Total : ( 0: 0) ( 1: 1) ( 1: 1) ( 0: 0) ( 0: 0)
Display and Timer on lcore 0, rx:tx counts per port/lcore
Configuring 2 ports, MBUF Size 2176, MBUF Cache Size 2048
Lcore:
1, RX-TX
RX_cnt( 1): (pid= 0:qid= 0)
TX_cnt( 1): (pid= 0:qid= 0)
2, RX-TX
RX_cnt( 1): (pid= 1:qid= 0)
TX_cnt( 1): (pid= 1:qid= 0)
Port :
0, nb_lcores 1, private 0xe17b80, lcores: 1
1, nb_lcores 1, private 0xe19b90, lcores: 2
** Device Info (0000:06:00.0, if_index:0, flags 00000002) **
min_rx_bufsize : 1024 max_rx_pktlen :15872 hash_key_size : 40
max_rx_queues : 128 max_tx_queues : 64 max_vfs : 0
max_mac_addrs : 127 max_hash_mac_addrs: 4096 max_vmdq_pools: 64
vmdq_queue_base: 0 vmdq_queue_num : 128 vmdq_pool_base: 0
nb_rx_queues : 0 nb_tx_queues : 0 speed_capa : 00000120
flow_type_rss_offloads:0000000000038d34 reta_size : 128
rx_offload_capa :000000000000be9f tx_offload_capa :0000000000172095
rx_queue_offload_capa :0000000000000001 tx_queue_offload_capa :0000000000000000
dev_capa :0000000000000000
RX Conf:
pthresh : 8 hthresh : 8 wthresh : 0
Free Thresh : 32 Drop Enable : 0 Deferred Start : 0
offloads :0000000000000000
TX Conf:
pthresh : 32 hthresh : 0 wthresh : 0
Free Thresh : 32 RS Thresh : 32 Deferred Start : 0 TXQ Flags: 00000f01
offloads :0000000000000000
Rx: descriptor Limits
nb_max : 4096 nb_min : 32 nb_align : 8
nb_seg_max : 0 nb_mtu_seg_max : 0
Tx: descriptor Limits
nb_max : 4096 nb_min : 32 nb_align : 8
nb_seg_max : 40 nb_mtu_seg_max : 40
Rx: Port Config
burst_size : 0 ring_size : 0 nb_queues : 0
Tx: Port Config
burst_size : 0 ring_size : 0 nb_queues : 0
Switch Info: (null)
domain_id : 0 port_id : 0
Create: Default RX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB, headroom 128
Set RX queue stats mapping pid 0, q 0, lcore 1
Create: Default TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB, headroom 128
Create: Range TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB, headroom 128
Create: Sequence TX 0:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB, headroom 128
Create: Special TX 0:0 - Memory used (MBUFs 64 x (size 2176 + Hdr 128)) + 192 = 145 KB, headroom 128
Port memory used = 147605 KB
Initialize Port 0 -- TxQ 1, RxQ 1, Src MAC 90:e2:ba:29:f6:40
** Device Info (0000:06:00.1, if_index:0, flags 00000002) **
min_rx_bufsize : 1024 max_rx_pktlen :15872 hash_key_size : 40
max_rx_queues : 128 max_tx_queues : 64 max_vfs : 0
max_mac_addrs : 127 max_hash_mac_addrs: 4096 max_vmdq_pools: 64
vmdq_queue_base: 0 vmdq_queue_num : 128 vmdq_pool_base: 0
nb_rx_queues : 0 nb_tx_queues : 0 speed_capa : 00000120
flow_type_rss_offloads:0000000000038d34 reta_size : 128
rx_offload_capa :000000000000be9f tx_offload_capa :0000000000172095
rx_queue_offload_capa :0000000000000001 tx_queue_offload_capa :0000000000000000
dev_capa :0000000000000000
RX Conf:
pthresh : 8 hthresh : 8 wthresh : 0
Free Thresh : 32 Drop Enable : 0 Deferred Start : 0
offloads :0000000000000000
TX Conf:
pthresh : 32 hthresh : 0 wthresh : 0
Free Thresh : 32 RS Thresh : 32 Deferred Start : 0 TXQ Flags: 00000f01
offloads :0000000000000000
Rx: descriptor Limits
nb_max : 4096 nb_min : 32 nb_align : 8
nb_seg_max : 0 nb_mtu_seg_max : 0
Tx: descriptor Limits
nb_max : 4096 nb_min : 32 nb_align : 8
nb_seg_max : 40 nb_mtu_seg_max : 40
Rx: Port Config
burst_size : 0 ring_size : 0 nb_queues : 0
Tx: Port Config
burst_size : 0 ring_size : 0 nb_queues : 0
Switch Info: (null)
domain_id : 0 port_id : 0
Create: Default RX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB, headroom 128
Set RX queue stats mapping pid 1, q 0, lcore 2
Create: Default TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB, headroom 128
Create: Range TX 1:0 - Memory used (MBUFs 16384 x (size 2176 + Hdr 128)) + 192 = 36865 KB, headroom 128
!PANIC!: Cannot create mbuf pool (Range TX 1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
PANIC in pktgen_mbuf_pool_create():
Cannot create mbuf pool (Range TX 1:0) port 1, queue 0, nb_mbufs 16384, socket_id 0: No such file or directory
6: [./app/x86_64-native-linuxapp-gcc/pktgen(_start+0x29) [0x466139]]
5: [/lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf0) [0x7ffff6cae830]]
4: [./app/x86_64-native-linuxapp-gcc/pktgen(main+0x598) [0x460388]]
3: [./app/x86_64-native-linuxapp-gcc/pktgen(pktgen_config_ports+0x18ad) [0x491aed]]
2: [./app/x86_64-native-linuxapp-gcc/pktgen(__rte_panic+0xc3) [0x45769b]]
1: [./app/x86_64-native-linuxapp-gcc/pktgen(rte_dump_stack+0x2b) [0x50aadb]]
Aborted
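If I redo the arithmetic from the "Create:" lines above (just sanity-checking
the log's own numbers), each port allocates four 16K-mbuf pools plus one small
one:

  echo $(( 16384 * (2176 + 128) / 1024 ))   # ~36864 KB per 16K-mbuf pool
  echo $(( 4 * 36865 + 145 ))               # 147605 KB per port, matching "Port memory used"
  echo $(( 2 * 147605 / 1024 ))             # ~288 MB of hugepage memory for two ports on socket 0

so the two-port run needs roughly twice the memory of the single-port runs. If
free hugepage memory on socket 0 is short of that, it could explain why the
second port's Range TX pool is the one that fails (this is only my assumption,
not confirmed).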
--
*******************************************************************
Stefano Salsano
Professore Associato
Dipartimento Ingegneria Elettronica
Universita' di Roma Tor Vergata
Viale Politecnico, 1 - 00133 Roma - ITALY
http://netgroup.uniroma2.it/Stefano_Salsano/
E-mail : stefano.salsano@uniroma2.it
Cell. : +39 320 4307310
Office : (Tel.) +39 06 72597770 (Fax.) +39 06 72597435
*******************************************************************