DPDK usage discussions
* [dpdk-users] SoftNIC usage / segfault?
@ 2020-04-29 14:27 Tom Barbette
  2020-04-29 16:21 ` Singh, Jasvinder
  0 siblings, 1 reply; 4+ messages in thread
From: Tom Barbette @ 2020-04-29 14:27 UTC (permalink / raw)
  Cc: jasvinder.singh, cristian.dumitrescu, users

Hi all,

I'm a little bit puzzled by the SoftNIC driver, and cannot make it work 
(with DPDK 20.02).

I modified the firmware "LINK" line to use my Mellanox ConnectX 5 
(0000:11:00.0). No other changes (I also tried in KVM with a virtio-pci 
device, with no more luck).

Depending on how I launch testpmd, I either receive nothing or get a segfault.

Note that DPDK is able to use 2 ports on my system (the two ConnectX 5 
ports).

1) It seems SoftNIC does not work with a whitelist
---
sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -w 11:00.0 --vdev 
'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086' 
-- -i --forward-mode=softnic --portmask=0x1
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:11:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1017 net_mlx5
Interactive-mode selected
Set softnic packet forwarding mode
previous number of forwarding ports 2 - changed to number of configured 
ports 1
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, 
size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: B8:83:03:6F:43:40
Configuring Port 1 (socket 0)
; SPDX-License-Identifier: BSD-3-Clause
; Copyright(c) 2018 Intel Corporation

link LINK dev 0000:11:00.0
Command "link" failed.
[...]
---

2) If I don't whitelist, I assume the softnic port will be the third. So 
I have to use the mask 0x4, right?

Everything seems right, but start/stop shows I received no packets, while 
a ping is definitely going on over a back-to-back cable (and when the 
DPDK app is killed, I can see counters rising). Did I miss something?
---
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

   ---------------------- Forward statistics for port 2 
----------------------
   RX-packets: 0              RX-dropped: 0             RX-total: 0
   TX-packets: 0              TX-dropped: 0             TX-total: 0
 
----------------------------------------------------------------------------

   +++++++++++++++ Accumulated forward statistics for all 
ports+++++++++++++++
   RX-packets: 0              RX-dropped: 0             RX-total: 0
   TX-packets: 0              TX-dropped: 0             TX-total: 0
 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.

---

3) With portmask 0x1 or 0x2, it segfaults:
---
sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev 
'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086' 
-- -i --forward-mode=softnic --portmask=0x1
EAL: Detected 16 lcore(s)
EAL: Detected 1 NUMA nodes
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probing VFIO support...
EAL: VFIO support initialized
EAL: PCI device 0000:00:04.0 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.1 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.2 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.3 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.4 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.5 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.6 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:00:04.7 on NUMA socket 0
EAL:   probe driver: 8086:2021 rawdev_ioat
EAL: PCI device 0000:11:00.0 on NUMA socket 0
EAL:   probe driver: 15b3:1017 net_mlx5
EAL: PCI device 0000:11:00.1 on NUMA socket 0
EAL:   probe driver: 15b3:1017 net_mlx5
Interactive-mode selected
Set softnic packet forwarding mode
previous number of forwarding ports 3 - changed to number of configured 
ports 1
testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456, 
size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: B8:83:03:6F:43:40
Configuring Port 1 (socket 0)
Port 1: B8:83:03:6F:43:41
Configuring Port 2 (socket 0)
; SPDX-License-Identifier: BSD-3-Clause
; Copyright(c) 2018 Intel Corporation

link LINK dev 0000:11:00.0

pipeline RX period 10 offset_port_id 0
pipeline RX port in bsz 32 link LINK rxq 0
pipeline RX port out bsz 32 swq RXQ0
pipeline RX table match stub
pipeline RX port in 0 table 0
pipeline RX table 0 rule add match default action fwd port 0

pipeline TX period 10 offset_port_id 0
pipeline TX port in bsz 32 swq TXQ0
pipeline TX port out bsz 32 link LINK txq 0
pipeline TX table match stub
pipeline TX port in 0 table 0
pipeline TX table 0 rule add match default action fwd port 0

thread 1 pipeline RX enable
Command "thread pipeline enable" failed.
thread 1 pipeline TX enable
Command "thread pipeline enable" failed.
Port 2: 00:00:00:00:00:00
Checking link statuses...
Done
testpmd> start
softnic packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support 
enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
   RX P=2/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02

   softnic packet forwarding packets/burst=32
   nb forwarding cores=1 - nb forwarding ports=1
   port 0: RX queue number: 1 Tx queue number: 1
     Rx offloads=0x0 Tx offloads=0x0
     RX queue: 0
       RX desc=256 - RX free threshold=0
       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
       RX Offloads=0x0
     TX queue: 0
       TX desc=256 - TX free threshold=0
       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
       TX offloads=0x0 - TX RS bit threshold=0
   port 1: RX queue number: 1 Tx queue number: 1
     Rx offloads=0x0 Tx offloads=0x0
     RX queue: 0
       RX desc=256 - RX free threshold=0
       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
       RX Offloads=0x0
     TX queue: 0
       TX desc=256 - TX free threshold=0
       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
       TX offloads=0x0 - TX RS bit threshold=0
   port 2: RX queue number: 1 Tx queue number: 1
     Rx offloads=0x0 Tx offloads=0x0
     RX queue: 0
       RX desc=0 - RX free threshold=0
       RX threshold registers: pthresh=0 hthresh=0  wthresh=0
       RX Offloads=0x0
     TX queue: 0
       TX desc=0 - TX free threshold=0
       TX threshold registers: pthresh=0 hthresh=0  wthresh=0
       TX offloads=0x0 - TX RS bit threshold=0
zsh: segmentation fault  sudo ./x86_64-native-linuxapp-gcc/app/testpmd 
-c 0x3 --vdev  -- -i
---

4) Also, the telnet session shows me no softnic> prompt and does not 
seem to react to anything, except that when I quit DPDK the telnet 
connection dies (as expected):
telnet 127.0.0.1 8086
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
[I can type anything here without reactions]


5) Am I correct to assume that SoftNIC can emulate RSS? I'm looking to 
implement RSS-based functional tests for a project.

Thanks!

Tom




* Re: [dpdk-users] SoftNIC usage / segfault?
  2020-04-29 14:27 [dpdk-users] SoftNIC usage / segfault? Tom Barbette
@ 2020-04-29 16:21 ` Singh, Jasvinder
  2020-04-30  8:30   ` Tom Barbette
  0 siblings, 1 reply; 4+ messages in thread
From: Singh, Jasvinder @ 2020-04-29 16:21 UTC (permalink / raw)
  To: Tom Barbette; +Cc: Dumitrescu, Cristian, users



> -----Original Message-----
> From: Tom Barbette <barbette@kth.se>
> Sent: Wednesday, April 29, 2020 3:27 PM
> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; users@dpdk.org
> Subject: SoftNIC usage / segfault?
> 
> Hi all,
> 
> I'm a little bit puzzled by the SoftNIC driver, and cannot make it work (with
> DPDK 20.02).
> 
> I modified the firmware "LINK" line to use my Mellanox ConnectX 5
> (0000:11:00.0). No other changes (also tried in KVM with a virtio-pci device,
> no more luck).
> 
> According to how I launch testpmd, either I receive nothing, or I segfault.
> 
> Note that DPDK is able to use 2 ports on my system (the two ConnectX 5
> ports).
> 
> 1) It seems SoftNIC does not work with whitelist
> ---
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -w 11:00.0 --vdev
> 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_po
> rt=8086'
> -- -i --forward-mode=softnic --portmask=0x1


[Jasvinder] - Please don't use the softnic forward mode in the above command. The simple testpmd fwd mode works for softnic now. I will remove this redundant code (softnicfwd.c) from testpmd.

Also, we use a service core to run the softnic, so the command could look like the one below:

sudo ./x86_64-native-linux-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=app/test-pmd/firmware.cli,cpu_id=0' -- -i

The portmask parameter should specify the softnic port so that the testpmd app can perform the loopback at the softnic level.

An example softnic firmware for a single port looks like this:

link LINK0 dev 0000:18:00.0
pipeline PIPELINE0 period 10 offset_port_id 0
pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
pipeline PIPELINE0 table match stub
pipeline PIPELINE0 port in 0 table 0
thread 2 pipeline PIPELINE0 enable
pipeline PIPELINE0 table 0 rule add match default action fwd port 0




> EAL: Detected 16 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:11:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> Interactive-mode selected
> Set softnic packet forwarding mode
> previous number of forwarding ports 2 - changed to number of configured
> ports 1
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc Configuring Port 0
> (socket 0) Port 0: B8:83:03:6F:43:40 Configuring Port 1 (socket 0) ; SPDX-
> License-Identifier: BSD-3-Clause ; Copyright(c) 2018 Intel Corporation
> 
> link LINK dev 0000:11:00.0
> Command "link" failed.
> [...]
> ---
> 
> 2) If I don't whitelist, I assume the softnic port will be the third. So I have to
> use the mask 0x4, right?

[Jasvinder] - Yes, the softnic port comes last in the list. To avoid confusion, it is better to bind to DPDK only the ports that are needed.


> Everything seems right, but start/stop shows I received no packet, while a
> ping is definitely going on with a back-to-back cable (and when the DPDK app
> is killed, I can see counters raising). Did I miss something?

[Jasvinder] - The suggestion above should help here.

> testpmd> stop
> Telling cores to stop...
> Waiting for lcores to finish...
> 
>    ---------------------- Forward statistics for port 2
> ----------------------
>    RX-packets: 0              RX-dropped: 0             RX-total: 0
>    TX-packets: 0              TX-dropped: 0             TX-total: 0
> 
> ----------------------------------------------------------------------------
> 
>    +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
>    RX-packets: 0              RX-dropped: 0             RX-total: 0
>    TX-packets: 0              TX-dropped: 0             TX-total: 0
> 
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> ++++++++++++
> 
> Done.
> 
> ---
> 
> 3) With portmask 0x1 or 0x2, it segfaults:
> ---
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev
> 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_po
> rt=8086'
> -- -i --forward-mode=softnic --portmask=0x1
> EAL: Detected 16 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:00:04.0 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.1 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.2 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.3 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.4 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.5 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.6 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.7 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:11:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> EAL: PCI device 0000:11:00.1 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> Interactive-mode selected
> Set softnic packet forwarding mode
> previous number of forwarding ports 3 - changed to number of configured
> ports 1
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc Configuring Port 0
> (socket 0) Port 0: B8:83:03:6F:43:40 Configuring Port 1 (socket 0) Port 1:
> B8:83:03:6F:43:41 Configuring Port 2 (socket 0) ; SPDX-License-Identifier: BSD-
> 3-Clause ; Copyright(c) 2018 Intel Corporation
> 
> link LINK dev 0000:11:00.0
> 
> pipeline RX period 10 offset_port_id 0
> pipeline RX port in bsz 32 link LINK rxq 0 pipeline RX port out bsz 32 swq
> RXQ0 pipeline RX table match stub pipeline RX port in 0 table 0 pipeline RX
> table 0 rule add match default action fwd port 0
> 
> pipeline TX period 10 offset_port_id 0
> pipeline TX port in bsz 32 swq TXQ0
> pipeline TX port out bsz 32 link LINK txq 0 pipeline TX table match stub
> pipeline TX port in 0 table 0 pipeline TX table 0 rule add match default action
> fwd port 0
> 
> thread 1 pipeline RX enable
> Command "thread pipeline enable" failed.
> thread 1 pipeline TX enable
> Command "thread pipeline enable" failed.
> Port 2: 00:00:00:00:00:00
> Checking link statuses...
> Done
> testpmd> start
> softnic packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support
> enabled, MP allocation mode: native Logical Core 1 (socket 0) forwards
> packets on 1 streams:
>    RX P=2/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
> 
>    softnic packet forwarding packets/burst=32
>    nb forwarding cores=1 - nb forwarding ports=1
>    port 0: RX queue number: 1 Tx queue number: 1
>      Rx offloads=0x0 Tx offloads=0x0
>      RX queue: 0
>        RX desc=256 - RX free threshold=0
>        RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        RX Offloads=0x0
>      TX queue: 0
>        TX desc=256 - TX free threshold=0
>        TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        TX offloads=0x0 - TX RS bit threshold=0
>    port 1: RX queue number: 1 Tx queue number: 1
>      Rx offloads=0x0 Tx offloads=0x0
>      RX queue: 0
>        RX desc=256 - RX free threshold=0
>        RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        RX Offloads=0x0
>      TX queue: 0
>        TX desc=256 - TX free threshold=0
>        TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        TX offloads=0x0 - TX RS bit threshold=0
>    port 2: RX queue number: 1 Tx queue number: 1
>      Rx offloads=0x0 Tx offloads=0x0
>      RX queue: 0
>        RX desc=0 - RX free threshold=0
>        RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        RX Offloads=0x0
>      TX queue: 0
>        TX desc=0 - TX free threshold=0
>        TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        TX offloads=0x0 - TX RS bit threshold=0
> zsh: segmentation fault  sudo ./x86_64-native-linuxapp-gcc/app/testpmd
> -c 0x3 --vdev  -- -i
> ---


[Jasvinder] - Please use the command above to run softnic with a service core; that should work.


> 4) Also, the telnet command shows me no softnic> prompt, and does not
> seem to react to anything, except when I quit DPDK the telnet dies (quite
> expected):
> telnet 127.0.0.1 8086
> Trying 127.0.0.1...
> Connected to 127.0.0.1.
> Escape character is '^]'.
> [I can type anything here without reactions]


[Jasvinder] - You need to modify the testpmd source code to use telnet. Softnic allows configuration through telnet; please look at the rte_pmd_softnic_manage() API in softnic/rte_eth_softnic.c.
 
> 
> 5) Am I correct to assume that SoftNIC can emulate RSS? I'm looking to
> implement RSS-based functional tests for a project.

[Jasvinder] - To emulate RSS in softnic, you need to build a pipeline block with a classification table.

> Thanks!
> 
> Tom
> 



* Re: [dpdk-users] SoftNIC usage / segfault?
  2020-04-29 16:21 ` Singh, Jasvinder
@ 2020-04-30  8:30   ` Tom Barbette
  2020-04-30  9:51     ` Singh, Jasvinder
  0 siblings, 1 reply; 4+ messages in thread
From: Tom Barbette @ 2020-04-30  8:30 UTC (permalink / raw)
  To: Singh, Jasvinder; +Cc: Dumitrescu, Cristian, users

Le 29/04/2020 à 18:21, Singh, Jasvinder a écrit :
> 
> 
>> -----Original Message-----
>> From: Tom Barbette <barbette@kth.se>
>> Sent: Wednesday, April 29, 2020 3:27 PM
>> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian
>> <cristian.dumitrescu@intel.com>; users@dpdk.org
>> Subject: SoftNIC usage / segfault?
>>
>> Hi all,
>>
>> I'm a little bit puzzled by the SoftNIC driver, and cannot make it work (with
>> DPDK 20.02).
>>
>> I modified the firmware "LINK" line to use my Mellanox ConnectX 5
>> (0000:11:00.0). No other changes (also tried in KVM with a virtio-pci device,
>> no more luck).
>>
>> According to how I launch testpmd, either I receive nothing, or I segfault.
>>
>> Note that DPDK is able to use 2 ports on my system (the two ConnectX 5
>> ports).
>>
>> 1) It seems SoftNIC does not work with whitelist
>> ---
>> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -w 11:00.0 --vdev
>> 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_po
>> rt=8086'
>> -- -i --forward-mode=softnic --portmask=0x1
> 
> 
> [Jasvinder] -  Please don’t use softnic mode in the above command. Simple testpmd fwd mode work for softnic now. I will remove this redundant code (softnicfwd.c) from testpmd.
Ok, thanks! FYI, I followed the guide at 
https://doc.dpdk.org/guides/nics/softnic.html, which should then be 
updated as well.
> 
> Also, we use service core to run the softnic, therefore the command could like as below;
> 
> sudo ./x86_64-native-linux-gcc/app/testpmd -c 0x7  s 0x4 -n 4 --vdev 'net_softnic0,firmware=app/test-pmd/firmware.cli,cpu_id=0' -- -i
> 
> the portmask parameter should be specified for softnic port so that testpmd app can perform loopback at the softnic level.
> 
> Example for softnic firmware with single port will look like below;
> 
> link LINK0 dev 0000:18:00.0
> pipeline PIPELINE0 period 10 offset_port_id 0
> pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
> pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
> pipeline PIPELINE0 table match stub
> pipeline PIPELINE0 port in 0 table 0
> thread 2 pipeline PIPELINE0 enable
> pipeline PIPELINE0 table 0 rule add match default action fwd port 0
> 
SoftNIC still seems not to get the packets. If I remove the portmask (so 
I get 3 NICs), I can see the "original" NIC still getting the packets 
while the SoftNIC driver gets nothing.

> 
> 
> 
>> EAL: Detected 16 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: Probing VFIO support...
>> EAL: VFIO support initialized
>> EAL: PCI device 0000:11:00.0 on NUMA socket 0
>> EAL:   probe driver: 15b3:1017 net_mlx5
>> Interactive-mode selected
>> Set softnic packet forwarding mode
>> previous number of forwarding ports 2 - changed to number of configured
>> ports 1
>> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
>> size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc Configuring Port 0
>> (socket 0) Port 0: B8:83:03:6F:43:40 Configuring Port 1 (socket 0) ; SPDX-
>> License-Identifier: BSD-3-Clause ; Copyright(c) 2018 Intel Corporation
>>
>> link LINK dev 0000:11:00.0
>> Command "link" failed.
>> [...]
>> ---
>>
>> 2) If I don't whitelist, I assume the softnic port will be the third. So I have to
>> use the mask 0x4, right?
> 
> [Jasvinder]   Yes, softnic port comes last in the list.  To avoid confusion, better to bind only that number of ports to the dpdk which are needed.
MLX5 does not need to be bound, so all the ports will appear anyway; 
that's why I tried whitelisting.
> 
> 
>> Everything seems right, but start/stop shows I received no packet, while a
>> ping is definitely going on with a back-to-back cable (and when the DPDK app
>> is killed, I can see counters raising). Did I miss something?
> 
> [Jasvinder] - Above suggestion should help here .
> 
>> testpmd> stop
>> Telling cores to stop...
>> Waiting for lcores to finish...
>>
>>     ---------------------- Forward statistics for port 2
>> ----------------------
>>     RX-packets: 0              RX-dropped: 0             RX-total: 0
>>     TX-packets: 0              TX-dropped: 0             TX-total: 0
>>
>> ----------------------------------------------------------------------------
>>
>>     +++++++++++++++ Accumulated forward statistics for all
>> ports+++++++++++++++
>>     RX-packets: 0              RX-dropped: 0             RX-total: 0
>>     TX-packets: 0              TX-dropped: 0             TX-total: 0
>>
>> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
>> ++++++++++++
>>
>> Done.
>>
>> ---
>>
>> 3) With portmask 0x1 or 0x2, it segfaults:
>> ---
>> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev
>> 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_po
>> rt=8086'
>> -- -i --forward-mode=softnic --portmask=0x1
>> EAL: Detected 16 lcore(s)
>> EAL: Detected 1 NUMA nodes
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: Probing VFIO support...
>> EAL: VFIO support initialized
>> EAL: PCI device 0000:00:04.0 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:00:04.1 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:00:04.2 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:00:04.3 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:00:04.4 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:00:04.5 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:00:04.6 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:00:04.7 on NUMA socket 0
>> EAL:   probe driver: 8086:2021 rawdev_ioat
>> EAL: PCI device 0000:11:00.0 on NUMA socket 0
>> EAL:   probe driver: 15b3:1017 net_mlx5
>> EAL: PCI device 0000:11:00.1 on NUMA socket 0
>> EAL:   probe driver: 15b3:1017 net_mlx5
>> Interactive-mode selected
>> Set softnic packet forwarding mode
>> previous number of forwarding ports 3 - changed to number of configured
>> ports 1
>> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
>> size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc Configuring Port 0
>> (socket 0) Port 0: B8:83:03:6F:43:40 Configuring Port 1 (socket 0) Port 1:
>> B8:83:03:6F:43:41 Configuring Port 2 (socket 0) ; SPDX-License-Identifier: BSD-
>> 3-Clause ; Copyright(c) 2018 Intel Corporation
>>
>> link LINK dev 0000:11:00.0
>>
>> pipeline RX period 10 offset_port_id 0
>> pipeline RX port in bsz 32 link LINK rxq 0 pipeline RX port out bsz 32 swq
>> RXQ0 pipeline RX table match stub pipeline RX port in 0 table 0 pipeline RX
>> table 0 rule add match default action fwd port 0
>>
>> pipeline TX period 10 offset_port_id 0
>> pipeline TX port in bsz 32 swq TXQ0
>> pipeline TX port out bsz 32 link LINK txq 0 pipeline TX table match stub
>> pipeline TX port in 0 table 0 pipeline TX table 0 rule add match default action
>> fwd port 0
>>
>> thread 1 pipeline RX enable
>> Command "thread pipeline enable" failed.
>> thread 1 pipeline TX enable
>> Command "thread pipeline enable" failed.
>> Port 2: 00:00:00:00:00:00
>> Checking link statuses...
>> Done
>> testpmd> start
>> softnic packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support
>> enabled, MP allocation mode: native Logical Core 1 (socket 0) forwards
>> packets on 1 streams:
>>     RX P=2/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
>>
>>     softnic packet forwarding packets/burst=32
>>     nb forwarding cores=1 - nb forwarding ports=1
>>     port 0: RX queue number: 1 Tx queue number: 1
>>       Rx offloads=0x0 Tx offloads=0x0
>>       RX queue: 0
>>         RX desc=256 - RX free threshold=0
>>         RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>         RX Offloads=0x0
>>       TX queue: 0
>>         TX desc=256 - TX free threshold=0
>>         TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>         TX offloads=0x0 - TX RS bit threshold=0
>>     port 1: RX queue number: 1 Tx queue number: 1
>>       Rx offloads=0x0 Tx offloads=0x0
>>       RX queue: 0
>>         RX desc=256 - RX free threshold=0
>>         RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>         RX Offloads=0x0
>>       TX queue: 0
>>         TX desc=256 - TX free threshold=0
>>         TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>         TX offloads=0x0 - TX RS bit threshold=0
>>     port 2: RX queue number: 1 Tx queue number: 1
>>       Rx offloads=0x0 Tx offloads=0x0
>>       RX queue: 0
>>         RX desc=0 - RX free threshold=0
>>         RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>         RX Offloads=0x0
>>       TX queue: 0
>>         TX desc=0 - TX free threshold=0
>>         TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>>         TX offloads=0x0 - TX RS bit threshold=0
>> zsh: segmentation fault  sudo ./x86_64-native-linuxapp-gcc/app/testpmd
>> -c 0x3 --vdev  -- -i
>> ---
> 
> 
> [Jasvinder] - Please use above command to run softnic with service core, should work.
> 
> 
>> 4) Also, the telnet command shows me no softnic> prompt, and does not
>> seem to react to anything, except when I quit DPDK the telnet dies (quite
>> expected):
>> telnet 127.0.0.1 8086
>> Trying 127.0.0.1...
>> Connected to 127.0.0.1.
>> Escape character is '^]'.
>> [I can type anything here without reactions]
> 
> 
> [Jasvinder] - You need to modify testpmd source code to use telnet. Softnic allows configuration through telnet, please look at rte_pmd_softnic_manage() api in softnic/rte_eth_softnic.c.
This is not clear from the guide either.
>   
>>
>> 5) Am I correct to assume that SoftNIC can emulate RSS? I'm looking to
>> implement RSS-based functional tests for a project.
> 
> [Jasvinder] - To emulate RSS in softnic, you need to build pipeline block with classification table.
Great!
> 
>> Thanks!
>>
>> Tom
>>
> 


* Re: [dpdk-users] SoftNIC usage / segfault?
  2020-04-30  8:30   ` Tom Barbette
@ 2020-04-30  9:51     ` Singh, Jasvinder
  0 siblings, 0 replies; 4+ messages in thread
From: Singh, Jasvinder @ 2020-04-30  9:51 UTC (permalink / raw)
  To: Tom Barbette; +Cc: Dumitrescu, Cristian, users



> -----Original Message-----
> From: Tom Barbette <barbette@kth.se>
> Sent: Thursday, April 30, 2020 9:31 AM
> To: Singh, Jasvinder <jasvinder.singh@intel.com>
> Cc: Dumitrescu, Cristian <cristian.dumitrescu@intel.com>; users@dpdk.org
> Subject: Re: SoftNIC usage / segfault?
> 
> Le 29/04/2020 à 18:21, Singh, Jasvinder a écrit :
> >
> >
> >> -----Original Message-----
> >> From: Tom Barbette <barbette@kth.se>
> >> Sent: Wednesday, April 29, 2020 3:27 PM
> >> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu,
> >> Cristian <cristian.dumitrescu@intel.com>; users@dpdk.org
> >> Subject: SoftNIC usage / segfault?
> >>
> >> Hi all,
> >>
> >> I'm a little bit puzzled by the SoftNIC driver, and cannot make it
> >> work (with DPDK 20.02).
> >>
> >> I modified the firmware "LINK" line to use my Mellanox ConnectX 5
> >> (0000:11:00.0). No other changes (also tried in KVM with a virtio-pci
> >> device, no more luck).
> >>
> >> According to how I launch testpmd, either I receive nothing, or I segfault.
> >>
> >> Note that DPDK is able to use 2 ports on my system (the two ConnectX
> >> 5 ports).
> >>
> >> 1) It seems SoftNIC does not work with whitelist
> >> ---
> >> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -w 11:00.0
> >> --vdev
> >> 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,co
> >> nn_po
> >> rt=8086'
> >> -- -i --forward-mode=softnic --portmask=0x1
> >
> >
> > [Jasvinder] -  Please don’t use softnic mode in the above command. Simple
> testpmd fwd mode work for softnic now. I will remove this redundant code
> (softnicfwd.c) from testpmd.
> Ok thanks! FYI, I followed the guide at
> https://doc.dpdk.org/guides/nics/softnic.html that should be updated too
> then.
> >
> > Also, we use service core to run the softnic, therefore the command
> > could like as below;
> >
> > sudo ./x86_64-native-linux-gcc/app/testpmd -c 0x7  s 0x4 -n 4 --vdev
> > 'net_softnic0,firmware=app/test-pmd/firmware.cli,cpu_id=0' -- -i
> >
> > the portmask parameter should be specified for softnic port so that
> testpmd app can perform loopback at the softnic level.
> >
> > Example for softnic firmware with single port will look like below;
> >
> > link LINK0 dev 0000:18:00.0
> > pipeline PIPELINE0 period 10 offset_port_id 0 pipeline PIPELINE0 port
> > in bsz 32 link LINK0 rxq 0 pipeline PIPELINE0 port out bsz 32 link
> > LINK0 txq 0 pipeline PIPELINE0 table match stub pipeline PIPELINE0
> > port in 0 table 0 thread 2 pipeline PIPELINE0 enable pipeline
> > PIPELINE0 table 0 rule add match default action fwd port 0
> >
> SoftNIC still seams to not get the packet. If I remove the portmask (so I get 3
> NICs), then I can see the "original" NIC still get the packets and the SoftNIC
> driver gets nothing.
> 

[Jasvinder] - Try removing the portmask param from the testpmd command, and use "set portlist <softnic port id>" before testpmd start.
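Concretely, assuming the SoftNIC port enumerates as port 2 (the id depends on how many physical ports are probed), the interactive session would look like:

```
testpmd> set portlist 2
testpmd> start
```

This restricts forwarding to the SoftNIC port only, so its pipeline handles the traffic instead of the underlying NIC ports.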

> >
> >
> >> EAL: Detected 16 lcore(s)
> >> EAL: Detected 1 NUMA nodes
> >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >> EAL: Selected IOVA mode 'PA'
> >> EAL: Probing VFIO support...
> >> EAL: VFIO support initialized
> >> EAL: PCI device 0000:11:00.0 on NUMA socket 0
> >> EAL:   probe driver: 15b3:1017 net_mlx5
> >> Interactive-mode selected
> >> Set softnic packet forwarding mode
> >> previous number of forwarding ports 2 - changed to number of
> >> configured ports 1
> >> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> >> size=2176, socket=0
> >> testpmd: preferred mempool ops selected: ring_mp_mc Configuring Port
> >> 0 (socket 0) Port 0: B8:83:03:6F:43:40 Configuring Port 1 (socket 0)
> >> ; SPDX-
> >> License-Identifier: BSD-3-Clause ; Copyright(c) 2018 Intel
> >> Corporation
> >>
> >> link LINK dev 0000:11:00.0
> >> Command "link" failed.
> >> [...]
> >> ---
> >>
> >> 2) If I don't whitelist, I assume the softnic port will be the third.
> >> So I have to use the mask 0x4, right?
> >
> > [Jasvinder]   Yes, softnic port comes last in the list.  To avoid confusion,
> better to bind only that number of ports to the dpdk which are needed.
> MLX5 does not need to be bound. So that's why I tried to whitelist because
> they will all appear.
> >
> >
> >> Everything seems right, but start/stop shows I received no packet,
> >> while a ping is definitely going on with a back-to-back cable (and
> >> when the DPDK app is killed, I can see counters raising). Did I miss
> something?
> >
> > [Jasvinder] - The above suggestion should help here.
> >
> >> testpmd> stop
> >> Telling cores to stop...
> >> Waiting for lcores to finish...
> >>
> >>     ---------------------- Forward statistics for port 2 ----------------------
> >>     RX-packets: 0              RX-dropped: 0             RX-total: 0
> >>     TX-packets: 0              TX-dropped: 0             TX-total: 0
> >>     ----------------------------------------------------------------------------
> >>
> >>     +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
> >>     RX-packets: 0              RX-dropped: 0             RX-total: 0
> >>     TX-packets: 0              TX-dropped: 0             TX-total: 0
> >>     ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> >>
> >> Done.
> >>
> >> ---
> >>
> >> 3) With portmask 0x1 or 0x2, it segfaults:
> >> ---
> >> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev
> >> 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086'
> >> -- -i --forward-mode=softnic --portmask=0x1
> >> EAL: Detected 16 lcore(s)
> >> EAL: Detected 1 NUMA nodes
> >> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> >> EAL: Selected IOVA mode 'PA'
> >> EAL: Probing VFIO support...
> >> EAL: VFIO support initialized
> >> EAL: PCI device 0000:00:04.0 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:00:04.1 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:00:04.2 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:00:04.3 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:00:04.4 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:00:04.5 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:00:04.6 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:00:04.7 on NUMA socket 0
> >> EAL:   probe driver: 8086:2021 rawdev_ioat
> >> EAL: PCI device 0000:11:00.0 on NUMA socket 0
> >> EAL:   probe driver: 15b3:1017 net_mlx5
> >> EAL: PCI device 0000:11:00.1 on NUMA socket 0
> >> EAL:   probe driver: 15b3:1017 net_mlx5
> >> Interactive-mode selected
> >> Set softnic packet forwarding mode
> >> previous number of forwarding ports 3 - changed to number of
> >> configured ports 1
> >> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> >> size=2176, socket=0
> >> testpmd: preferred mempool ops selected: ring_mp_mc
> >> Configuring Port 0 (socket 0)
> >> Port 0: B8:83:03:6F:43:40
> >> Configuring Port 1 (socket 0)
> >> Port 1: B8:83:03:6F:43:41
> >> Configuring Port 2 (socket 0)
> >> ; SPDX-License-Identifier: BSD-3-Clause
> >> ; Copyright(c) 2018 Intel Corporation
> >>
> >> link LINK dev 0000:11:00.0
> >>
> >> pipeline RX period 10 offset_port_id 0 pipeline RX port in bsz 32
> >> link LINK rxq 0 pipeline RX port out bsz 32 swq
> >> RXQ0 pipeline RX table match stub pipeline RX port in 0 table 0
> >> pipeline RX table 0 rule add match default action fwd port 0
> >>
> >> pipeline TX period 10 offset_port_id 0 pipeline TX port in bsz 32 swq
> >> TXQ0 pipeline TX port out bsz 32 link LINK txq 0 pipeline TX table
> >> match stub pipeline TX port in 0 table 0 pipeline TX table 0 rule add
> >> match default action fwd port 0
> >>
> >> thread 1 pipeline RX enable
> >> Command "thread pipeline enable" failed.
> >> thread 1 pipeline TX enable
> >> Command "thread pipeline enable" failed.
> >> Port 2: 00:00:00:00:00:00
> >> Checking link statuses...
> >> Done
> >> testpmd> start
> >> softnic packet forwarding - ports=1 - cores=1 - streams=1 - NUMA
> >> support enabled, MP allocation mode: native Logical Core 1 (socket 0)
> >> forwards packets on 1 streams:
> >>     RX P=2/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0)
> >> peer=02:00:00:00:00:02
> >>
> >>     softnic packet forwarding packets/burst=32
> >>     nb forwarding cores=1 - nb forwarding ports=1
> >>     port 0: RX queue number: 1 Tx queue number: 1
> >>       Rx offloads=0x0 Tx offloads=0x0
> >>       RX queue: 0
> >>         RX desc=256 - RX free threshold=0
> >>         RX threshold registers: pthresh=0 hthresh=0  wthresh=0
> >>         RX Offloads=0x0
> >>       TX queue: 0
> >>         TX desc=256 - TX free threshold=0
> >>         TX threshold registers: pthresh=0 hthresh=0  wthresh=0
> >>         TX offloads=0x0 - TX RS bit threshold=0
> >>     port 1: RX queue number: 1 Tx queue number: 1
> >>       Rx offloads=0x0 Tx offloads=0x0
> >>       RX queue: 0
> >>         RX desc=256 - RX free threshold=0
> >>         RX threshold registers: pthresh=0 hthresh=0  wthresh=0
> >>         RX Offloads=0x0
> >>       TX queue: 0
> >>         TX desc=256 - TX free threshold=0
> >>         TX threshold registers: pthresh=0 hthresh=0  wthresh=0
> >>         TX offloads=0x0 - TX RS bit threshold=0
> >>     port 2: RX queue number: 1 Tx queue number: 1
> >>       Rx offloads=0x0 Tx offloads=0x0
> >>       RX queue: 0
> >>         RX desc=0 - RX free threshold=0
> >>         RX threshold registers: pthresh=0 hthresh=0  wthresh=0
> >>         RX Offloads=0x0
> >>       TX queue: 0
> >>         TX desc=0 - TX free threshold=0
> >>         TX threshold registers: pthresh=0 hthresh=0  wthresh=0
> >>         TX offloads=0x0 - TX RS bit threshold=0
> >> zsh: segmentation fault  sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev  -- -i
> >> ---
> >
> >
> > [Jasvinder] - Please use the above command to run softnic with a service core;
> > it should work.
> >
> >
> >> 4) Also, the telnet command shows me no softnic> prompt, and does not
> >> seem to react to anything, except when I quit DPDK the telnet dies
> >> (quite
> >> expected):
> >> telnet 127.0.0.1 8086
> >> Trying 127.0.0.1...
> >> Connected to 127.0.0.1.
> >> Escape character is '^]'.
> >> [I can type anything here without reactions]
> >
> >
> > [Jasvinder] - You need to modify the testpmd source code to use telnet. Softnic
> > allows configuration through telnet; please look at the
> > rte_pmd_softnic_manage() API in softnic/rte_eth_softnic.c.
> This isn't clear from the guide either.
> >
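In outline, the change Jasvinder describes is to call rte_pmd_softnic_manage() periodically from the application. A minimal sketch (not a drop-in patch; the port id lookup, return-value checks, and the actual forwarding work are omitted):

---
/* Sketch: service SoftNIC's CLI connection from the application's main
 * loop, so the telnet console listening on conn_port becomes responsive.
 * Assumes softnic_port_id is the SoftNIC port's id. */
#include <rte_eth_softnic.h>

static void
app_main_loop(uint16_t softnic_port_id)
{
	for ( ; ; ) {
		/* Poll the connection, accept clients, run pending CLI commands */
		rte_pmd_softnic_manage(softnic_port_id);

		/* ... regular RX/TX burst work ... */
	}
}
---

Without some call site like this (testpmd does not have one by default), the telnet session connects but never gets a prompt, which matches the behavior observed in point 4.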
> >>
> >> 5) Am I correct to assume that SoftNIC can emulate RSS? I'm looking
> >> to implement RSS-based functional tests for a project.
> >
> > [Jasvinder] - To emulate RSS in softnic, you need to build a pipeline block
> > with a classification table.
> Great!
> >
> >> Thanks!
> >>
> >> Tom
> >>
> >


Thread overview: 4+ messages
2020-04-29 14:27 [dpdk-users] SoftNIC usage / segfault? Tom Barbette
2020-04-29 16:21 ` Singh, Jasvinder
2020-04-30  8:30   ` Tom Barbette
2020-04-30  9:51     ` Singh, Jasvinder
