From: Tom Barbette <barbette@kth.se>
Cc: jasvinder.singh@intel.com, cristian.dumitrescu@intel.com,
"users@dpdk.org" <users@dpdk.org>
Subject: [dpdk-users] SoftNIC usage / segfault?
Date: Wed, 29 Apr 2020 16:27:19 +0200
Message-ID: <616b8561-52b1-c781-0d24-d717d8387f24@kth.se>
Hi all,
I'm a little bit puzzled by the SoftNIC driver, and cannot make it work
(with DPDK 20.02).
I modified the firmware "LINK" line to use my Mellanox ConnectX-5
(0000:11:00.0), with no other changes (I also tried in KVM with a
virtio-pci device, with no more luck).
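For reference, here is the full firmware.cli I use: the stock
drivers/net/softnic/firmware.cli with only the LINK device changed
(reconstructed here from the commands echoed in the logs below):
---
; SPDX-License-Identifier: BSD-3-Clause
; Copyright(c) 2018 Intel Corporation

link LINK dev 0000:11:00.0

pipeline RX period 10 offset_port_id 0
pipeline RX port in bsz 32 link LINK rxq 0
pipeline RX port out bsz 32 swq RXQ0
pipeline RX table match stub
pipeline RX port in 0 table 0
pipeline RX table 0 rule add match default action fwd port 0

pipeline TX period 10 offset_port_id 0
pipeline TX port in bsz 32 swq TXQ0
pipeline TX port out bsz 32 link LINK txq 0
pipeline TX table match stub
pipeline TX port in 0 table 0
pipeline TX table 0 rule add match default action fwd port 0

thread 1 pipeline RX enable
thread 1 pipeline TX enable
---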
Depending on how I launch testpmd, I either receive nothing or get a segfault.
Note that DPDK is able to use 2 ports on my system (the two ConnectX-5
ports).
1) It seems SoftNIC does not work with device whitelisting (-w)
---
sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -w 11:00.0 --vdev \
'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086' \
-- -i --forward-mode=softnic --portmask=0x1
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:11:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1017 net_mlx5
Interactive-mode selected
Set softnic packet forwarding mode
previous number of forwarding ports 2 - changed to number of configured ports 1
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: B8:83:03:6F:43:40
Configuring Port 1 (socket 0)
; SPDX-License-Identifier: BSD-3-Clause
; Copyright(c) 2018 Intel Corporation
link LINK dev 0000:11:00.0
Command "link" failed.
[...]
---
2) If I don't whitelist, I assume the softnic port will be the third, so
I have to use portmask 0x4, right?
Everything seems right, but start/stop shows that I received no packets,
while a ping is definitely flowing over a back-to-back cable (and when
the DPDK app is killed, I can see the kernel counters rising). Did I
miss something?
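For reference, the invocation here is the same as in 3) below, only
with the portmask changed:
---
sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev \
'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086' \
-- -i --forward-mode=softnic --portmask=0x4
---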
---
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...
---------------------- Forward statistics for port 2 ----------------------
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
----------------------------------------------------------------------------
+++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
RX-packets: 0 RX-dropped: 0 RX-total: 0
TX-packets: 0 TX-dropped: 0 TX-total: 0
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Done.
---
3) With portmask 0x1 or 0x2, it segfaults:
---
sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev \
'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086' \
-- -i --forward-mode=softnic --portmask=0x1
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL: probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:11:00.0 on NUMA socket 0
EAL: probe driver: 15b3:1017 net_mlx5
EAL: PCI device 0000:11:00.1 on NUMA socket 0
EAL: probe driver: 15b3:1017 net_mlx5
Interactive-mode selected
Set softnic packet forwarding mode
previous number of forwarding ports 3 - changed to number of configured ports 1
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: B8:83:03:6F:43:40
Configuring Port 1 (socket 0)
Port 1: B8:83:03:6F:43:41
Configuring Port 2 (socket 0)
; SPDX-License-Identifier: BSD-3-Clause
; Copyright(c) 2018 Intel Corporation
link LINK dev 0000:11:00.0
pipeline RX period 10 offset_port_id 0
pipeline RX port in bsz 32 link LINK rxq 0
pipeline RX port out bsz 32 swq RXQ0
pipeline RX table match stub
pipeline RX port in 0 table 0
pipeline RX table 0 rule add match default action fwd port 0
pipeline TX period 10 offset_port_id 0
pipeline TX port in bsz 32 swq TXQ0
pipeline TX port out bsz 32 link LINK txq 0
pipeline TX table match stub
pipeline TX port in 0 table 0
pipeline TX table 0 rule add match default action fwd port 0
thread 1 pipeline RX enable
Command "thread pipeline enable" failed.
thread 1 pipeline TX enable
Command "thread pipeline enable" failed.
Port 2: 00:00:00:00:00:00
Checking link statuses...
Done
testpmd> start
softnic packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=2/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
softnic packet forwarding packets/burst=32
nb forwarding cores=1 - nb forwarding ports=1
port 0: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 1: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=256 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=256 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
port 2: RX queue number: 1 Tx queue number: 1
Rx offloads=0x0 Tx offloads=0x0
RX queue: 0
RX desc=0 - RX free threshold=0
RX threshold registers: pthresh=0 hthresh=0 wthresh=0
RX Offloads=0x0
TX queue: 0
TX desc=0 - TX free threshold=0
TX threshold registers: pthresh=0 hthresh=0 wthresh=0
TX offloads=0x0 - TX RS bit threshold=0
zsh: segmentation fault  sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev -- -i
---
4) Also, the telnet connection shows no softnic> prompt and does not
react to anything I type; the only effect I see is that when I quit
DPDK the telnet session dies (which is expected):
telnet 127.0.0.1 8086
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
[I can type anything here without any reaction]
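For the record, I expected the connection to accept the same commands
as the firmware script, e.g. something along these lines (hypothetical
session, reusing a command from the script above):
---
softnic> thread 1 pipeline RX enable
---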
5) Am I correct to assume that SoftNIC can emulate RSS? I'm looking to
implement RSS-based functional tests for a project.
Thanks!
Tom