DPDK usage discussions
From: "Singh, Jasvinder" <jasvinder.singh@intel.com>
To: Tom Barbette <barbette@kth.se>
Cc: "Dumitrescu, Cristian" <cristian.dumitrescu@intel.com>,
	"users@dpdk.org" <users@dpdk.org>
Subject: Re: [dpdk-users] SoftNIC usage / segfault?
Date: Wed, 29 Apr 2020 16:21:16 +0000	[thread overview]
Message-ID: <DM5PR11MB00747BAD858AEC07B47BC829E0AD0@DM5PR11MB0074.namprd11.prod.outlook.com> (raw)
In-Reply-To: <616b8561-52b1-c781-0d24-d717d8387f24@kth.se>



> -----Original Message-----
> From: Tom Barbette <barbette@kth.se>
> Sent: Wednesday, April 29, 2020 3:27 PM
> Cc: Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian
> <cristian.dumitrescu@intel.com>; users@dpdk.org
> Subject: SoftNIC usage / segfault?
> 
> Hi all,
> 
> I'm a little bit puzzled by the SoftNIC driver, and cannot make it work (with
> DPDK 20.02).
> 
> I modified the firmware "LINK" line to use my Mellanox ConnectX 5
> (0000:11:00.0). No other changes (also tried in KVM with a virtio-pci device,
> no more luck).
> 
> According to how I launch testpmd, either I receive nothing, or I segfault.
> 
> Note that DPDK is able to use 2 ports on my system (the two ConnectX 5
> ports).
> 
> 1) It seems SoftNIC does not work with whitelist
> ---
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -w 11:00.0 --vdev 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086' -- -i --forward-mode=softnic --portmask=0x1


[Jasvinder] - Please don't use the softnic forwarding mode in the above command. The plain testpmd forwarding modes now work for softnic; I will remove this redundant code (softnicfwd.c) from testpmd.

Also, we use a service core to run softnic, so the command should look like this:

sudo ./x86_64-native-linux-gcc/app/testpmd -c 0x7 -s 0x4 -n 4 --vdev 'net_softnic0,firmware=app/test-pmd/firmware.cli,cpu_id=0' -- -i

The portmask parameter should select the softnic port so that the testpmd app performs the loopback at the softnic level (e.g. with two physical ports bound, softnic is port 2, so use --portmask=0x4).

An example softnic firmware script for a single port looks like this:

link LINK0 dev 0000:18:00.0
pipeline PIPELINE0 period 10 offset_port_id 0
pipeline PIPELINE0 port in bsz 32 link LINK0 rxq 0
pipeline PIPELINE0 port out bsz 32 link LINK0 txq 0
pipeline PIPELINE0 table match stub
pipeline PIPELINE0 port in 0 table 0
thread 2 pipeline PIPELINE0 enable
pipeline PIPELINE0 table 0 rule add match default action fwd port 0




> EAL: Detected 16 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:11:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> Interactive-mode selected
> Set softnic packet forwarding mode
> previous number of forwarding ports 2 - changed to number of configured
> ports 1
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: B8:83:03:6F:43:40
> Configuring Port 1 (socket 0)
> ; SPDX-License-Identifier: BSD-3-Clause
> ; Copyright(c) 2018 Intel Corporation
> 
> link LINK dev 0000:11:00.0
> Command "link" failed.
> [...]
> ---
> 
> 2) If I don't whitelist, I assume the softnic port will be the third. So I have to
> use the mask 0x4, right?

[Jasvinder] - Yes, the softnic port comes last in the list. To avoid confusion, it is better to bind to DPDK only the ports that are needed.


> Everything seems right, but start/stop shows I received no packet, while a
> ping is definitely going on with a back-to-back cable (and when the DPDK app
> is killed, I can see counters raising). Did I miss something?

[Jasvinder] - The suggestion above should help here.

> testpmd> stop
> Telling cores to stop...
> Waiting for lcores to finish...
> 
>    ---------------------- Forward statistics for port 2
> ----------------------
>    RX-packets: 0              RX-dropped: 0             RX-total: 0
>    TX-packets: 0              TX-dropped: 0             TX-total: 0
> 
> ----------------------------------------------------------------------------
> 
>    +++++++++++++++ Accumulated forward statistics for all
> ports+++++++++++++++
>    RX-packets: 0              RX-dropped: 0             RX-total: 0
>    TX-packets: 0              TX-dropped: 0             TX-total: 0
> 
> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
> ++++++++++++
> 
> Done.
> 
> ---
> 
> 3) With portmask 0x1 or 0x2, it segfaults:
> ---
> sudo ./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 --vdev 'net_softnic0,firmware=./drivers/net/softnic/firmware.cli,cpu_id=0,conn_port=8086' -- -i --forward-mode=softnic --portmask=0x1
> EAL: Detected 16 lcore(s)
> EAL: Detected 1 NUMA nodes
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'PA'
> EAL: Probing VFIO support...
> EAL: VFIO support initialized
> EAL: PCI device 0000:00:04.0 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.1 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.2 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.3 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.4 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.5 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.6 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:00:04.7 on NUMA socket 0
> EAL:   probe driver: 8086:2021 rawdev_ioat
> EAL: PCI device 0000:11:00.0 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> EAL: PCI device 0000:11:00.1 on NUMA socket 0
> EAL:   probe driver: 15b3:1017 net_mlx5
> Interactive-mode selected
> Set softnic packet forwarding mode
> previous number of forwarding ports 3 - changed to number of configured
> ports 1
> testpmd: create a new mbuf pool <mbuf_pool_socket_0>: n=155456,
> size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Configuring Port 0 (socket 0)
> Port 0: B8:83:03:6F:43:40
> Configuring Port 1 (socket 0)
> Port 1: B8:83:03:6F:43:41
> Configuring Port 2 (socket 0)
> ; SPDX-License-Identifier: BSD-3-Clause
> ; Copyright(c) 2018 Intel Corporation
> 
> link LINK dev 0000:11:00.0
> 
> pipeline RX period 10 offset_port_id 0
> pipeline RX port in bsz 32 link LINK rxq 0
> pipeline RX port out bsz 32 swq RXQ0
> pipeline RX table match stub
> pipeline RX port in 0 table 0
> pipeline RX table 0 rule add match default action fwd port 0
> 
> pipeline TX period 10 offset_port_id 0
> pipeline TX port in bsz 32 swq TXQ0
> pipeline TX port out bsz 32 link LINK txq 0
> pipeline TX table match stub
> pipeline TX port in 0 table 0
> pipeline TX table 0 rule add match default action fwd port 0
> 
> thread 1 pipeline RX enable
> Command "thread pipeline enable" failed.
> thread 1 pipeline TX enable
> Command "thread pipeline enable" failed.
> Port 2: 00:00:00:00:00:00
> Checking link statuses...
> Done
> testpmd> start
> softnic packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
> Logical Core 1 (socket 0) forwards packets on 1 streams:
>    RX P=2/Q=0 (socket 0) -> TX P=2/Q=0 (socket 0) peer=02:00:00:00:00:02
> 
>    softnic packet forwarding packets/burst=32
>    nb forwarding cores=1 - nb forwarding ports=1
>    port 0: RX queue number: 1 Tx queue number: 1
>      Rx offloads=0x0 Tx offloads=0x0
>      RX queue: 0
>        RX desc=256 - RX free threshold=0
>        RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        RX Offloads=0x0
>      TX queue: 0
>        TX desc=256 - TX free threshold=0
>        TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        TX offloads=0x0 - TX RS bit threshold=0
>    port 1: RX queue number: 1 Tx queue number: 1
>      Rx offloads=0x0 Tx offloads=0x0
>      RX queue: 0
>        RX desc=256 - RX free threshold=0
>        RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        RX Offloads=0x0
>      TX queue: 0
>        TX desc=256 - TX free threshold=0
>        TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        TX offloads=0x0 - TX RS bit threshold=0
>    port 2: RX queue number: 1 Tx queue number: 1
>      Rx offloads=0x0 Tx offloads=0x0
>      RX queue: 0
>        RX desc=0 - RX free threshold=0
>        RX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        RX Offloads=0x0
>      TX queue: 0
>        TX desc=0 - TX free threshold=0
>        TX threshold registers: pthresh=0 hthresh=0  wthresh=0
>        TX offloads=0x0 - TX RS bit threshold=0
> zsh: segmentation fault  sudo ./x86_64-native-linuxapp-gcc/app/testpmd
> -c 0x3 --vdev  -- -i
> ---


[Jasvinder] - Please use the command above to run softnic with a service core; it should work.


> 4) Also, the telnet command shows me no softnic> prompt, and does not
> seem to react to anything, except when I quit DPDK the telnet dies (quite
> expected):
> telnet 127.0.0.1 8086
> Trying 127.0.0.1...
> Connected to 127.0.0.1.
> Escape character is '^]'.
> [I can type anything here without reactions]


[Jasvinder] - You need to modify the testpmd source code to use telnet. Softnic allows configuration through telnet; please look at the rte_pmd_softnic_manage() API in softnic/rte_eth_softnic.c.
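The pattern is a periodic polling loop on a control core; unless something calls the management function, the telnet connection is accepted but never serviced. A minimal sketch of that loop, with a local stub standing in for the real rte_pmd_softnic_manage() (declared in rte_eth_softnic.h) so the sketch compiles on its own:

```c
#include <stdint.h>

/* Stub standing in for the real DPDK call from rte_eth_softnic.h; in an
 * actual build, drop this definition and link against the softnic PMD.
 * The real function polls the CLI connection (the conn_port telnet
 * session) and executes any pending commands. */
static int rte_pmd_softnic_manage(uint16_t port_id)
{
	(void)port_id;
	return 0; /* 0 on success, matching the real API's convention */
}

/* Poll the softnic management function n times, as a control thread
 * would in its main loop; returns how many calls succeeded. */
int run_manage_loop(uint16_t softnic_port_id, int n)
{
	int ok = 0;
	for (int i = 0; i < n; i++)
		if (rte_pmd_softnic_manage(softnic_port_id) == 0)
			ok++;
	return ok;
}
```

In testpmd this loop would sit on a control lcore alongside the forwarding loop; the port id is whatever softnic was assigned (2 in your runs, since softnic comes last).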
 
> 
> 5) Am I correct to assume that SoftNIC can emulate RSS? I'm looking to
> implement RSS-based functional tests for a project.

[Jasvinder] - To emulate RSS in softnic, you need to build a pipeline block with a classification table.
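As a sketch only (pipeline name is hypothetical, and the key offset/mask follow the ip_pipeline flow-classification example for an IPv4 5-tuple; double-check both against the softnic CLI parser), the stub table in the firmware would be replaced by a hash table:

```
pipeline RSS period 10 offset_port_id 0
pipeline RSS port in bsz 32 link LINK0 rxq 0
pipeline RSS port out bsz 32 link LINK0 txq 0
pipeline RSS table match hash ext key 16 mask 00FF0000FFFFFFFFFFFFFFFFFFFFFFFF offset 278 buckets 16K size 65K
pipeline RSS port in 0 table 0
thread 2 pipeline RSS enable
```

Flows are then spread by adding per-flow rules to table 0 that forward to different output ports or queues, which is how the RSS-like distribution is expressed.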

> Thanks!
> 
> Tom
> 


Thread overview: 4+ messages
2020-04-29 14:27 Tom Barbette
2020-04-29 16:21 ` Singh, Jasvinder [this message]
2020-04-30  8:30   ` Tom Barbette
2020-04-30  9:51     ` Singh, Jasvinder
