DPDK patches and discussions
* Softnic test failed + questions
From: Maxime Ramiara @ 2022-05-03  9:33 UTC (permalink / raw)
  To: dev


Hi all,

I'm a beginner with DPDK. My team and I developed a DPDK application with
the following pipeline:

NIC RX -> RX Thread -> Worker Thread -> TX Thread -> NIC TX.

Within the RX thread, we parse some headers. Within the worker thread, we
use the hierarchical scheduler (HS). In short, we want to replace the HS
with the traffic manager (TM).
However, it seems the TM can only be set up on a NIC. This is not what we
want, because we still do some packet processing within the TX thread.
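
For context, here is roughly what our worker thread does today with the
hierarchical scheduler. This is a simplified sketch, not our real code; the
names (worker_loop, from_rx, to_tx, MAX_BURST) are made up for illustration:

#include <rte_mbuf.h>
#include <rte_ring.h>
#include <rte_sched.h>

#define MAX_BURST 32

/* Worker thread (sketch): drain packets handed over by the RX thread, push
 * them through the hierarchical scheduler, then pass whatever the scheduler
 * releases on to the TX thread. */
static void
worker_loop(struct rte_ring *from_rx, struct rte_ring *to_tx,
            struct rte_sched_port *sched_port)
{
    struct rte_mbuf *pkts[MAX_BURST];

    for (;;) {
        /* Packets were already classified (subport/pipe/tc/queue)
         * by the RX thread before reaching this ring. */
        unsigned int n_rx = rte_ring_dequeue_burst(from_rx, (void **)pkts,
                                                   MAX_BURST, NULL);
        if (n_rx > 0)
            rte_sched_port_enqueue(sched_port, pkts, n_rx);

        /* Dequeue whatever the scheduler decides to release now. */
        int n_tx = rte_sched_port_dequeue(sched_port, pkts, MAX_BURST);
        if (n_tx > 0)
            rte_ring_enqueue_burst(to_tx, (void **)pkts, n_tx, NULL);
    }
}

We would like the TM stage to slot into exactly this place in the pipeline
rather than on the NIC.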


So we thought about the SoftNIC as a solution to our problem. Would it be
possible to build a pipeline like this?

NIC RX -> RX Thread -> SoftNIC with TM -> Worker Thread -> TX Thread -> NIC
TX.
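
To make the question more concrete, here is how we imagine the application
side. It assumes the SoftNIC vdev is probed as port softnic_port and that a
firmware.cli can be written so that packets sent to the SoftNIC TX queue go
through the traffic manager and come back on a SoftNIC RX queue; the
function names are made up and this is only a sketch of our intent, not
working code:

#include <rte_ethdev.h>
#include <rte_eth_softnic.h>
#include <rte_mbuf.h>

#define BURST 32

/* RX thread (sketch): pull from the hard NIC, parse headers, then hand the
 * packets to the SoftNIC, which would apply the traffic manager according
 * to its firmware.cli. */
static void
rx_thread_iter(uint16_t hw_port, uint16_t softnic_port)
{
    struct rte_mbuf *pkts[BURST];
    uint16_t n = rte_eth_rx_burst(hw_port, 0, pkts, BURST);

    /* ... header parsing / classification would happen here ... */

    uint16_t sent = rte_eth_tx_burst(softnic_port, 0, pkts, n);
    while (sent < n)
        rte_pktmbuf_free(pkts[sent++]);
}

/* Worker thread (sketch): read back the packets the SoftNIC TM released,
 * do the worker processing, then hand them to the TX thread. */
static void
worker_thread_iter(uint16_t softnic_port)
{
    struct rte_mbuf *pkts[BURST];
    uint16_t n = rte_eth_rx_burst(softnic_port, 0, pkts, BURST);

    /* ... worker processing, then hand over to the TX thread ... */
    (void)n;
}

/* Someone has to run the SoftNIC pipelines: either the PMD's service cores,
 * or the application calling rte_pmd_softnic_run() periodically from one of
 * its own lcores. */
static void
softnic_poll(uint16_t softnic_port)
{
    rte_pmd_softnic_run(softnic_port);
}

Whether the firmware.cli side of this is actually expressible (SoftNIC TX
queue -> traffic manager -> SoftNIC RX queue, without a hard NIC in between)
is exactly what we are unsure about.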

It looks like the "firmware.cli" script and the packet framework give us
some freedom to build our own pipeline.
First, I tried to test the SoftNIC with the following command from the
documentation:

./testpmd -c 0x3 --vdev 'net_softnic0,firmware=<script
path>/firmware.cli,cpu_id=0,conn_port=8086' -- -i
     --forward-mode=softnic --portmask=0x2

Below are my network devices as reported by dpdk-devbind:

./usertools/dpdk-devbind.py --status

Network devices using DPDK-compatible driver
============================================
0000:19:00.0 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=vfio-pci
unused=i40e
0000:19:00.1 'Ethernet Controller X710 for 10GbE SFP+ 1572' drv=vfio-pci
unused=i40e

Network devices using kernel driver
===================================
0000:01:00.0 'I350 Gigabit Network Connection 1521' if=eno3 drv=igb
unused=vfio-pci *Active*

Other Network devices
=====================
0000:01:00.1 'I350 Gigabit Network Connection 1521' unused=igb,vfio-pci

Then, the testpmd log:

sudo ./dpdk-testpmd --vdev
'net_softnic0,firmware=./firmware.cli,cpu_id=0,conn_port=8087' -- -i
--forward-mode=softnic --portmask=0x2
[sudo] password for user:
EAL: Detected CPU lcores: 32
EAL: Detected NUMA nodes: 2
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: VFIO support initialized
EAL: Using IOMMU type 8 (No-IOMMU)
EAL: Probe PCI driver: net_i40e (8086:1572) device: 0000:19:00.0 (socket 0)
EAL: Probe PCI driver: net_i40e (8086:1572) device: 0000:19:00.1 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Invalid softnic packet forwarding mode
previous number of forwarding ports 3 - changed to number of configured
ports 1
testpmd: create a new mbuf pool <mb_pool_0>: n=395456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=395456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port
will pair with itself.

Configuring Port 0 (socket 0)
Port 0: E4:43:4B:04:D1:4E
Configuring Port 1 (socket 0)
Port 1: E4:43:4B:04:D1:50
Configuring Port 2 (socket 0)
; SPDX-License-Identifier: BSD-3-Clause
; Copyright(c) 2018 Intel Corporation

link LINK0 dev 0000:19:00.0

pipeline RX period 10 offset_port_id 0
pipeline RX port in bsz 32 link LINK0 rxq 0
pipeline RX port out bsz 32 swq RXQ0
pipeline RX table match stub
pipeline RX port in 0 table 0
pipeline RX table 0 rule add match default action fwd port 0

pipeline TX period 10 offset_port_id 0
pipeline TX port in bsz 32 swq TXQ0
pipeline TX port out bsz 32 link LINK0 txq 0
pipeline TX table match stub
pipeline TX port in 0 table 0
pipeline TX table 0 rule add match default action fwd port 0

thread 1 pipeline RX enable
Command "thread pipeline enable" failed.
thread 1 pipeline TX enable
Command "thread pipeline enable" failed.
Port 2: 00:00:00:00:00:00
Checking link statuses...
Done
testpmd>
Port 0: link state change event

Port 1: link state change event


Can anyone please help us with this?

Regards,

Max.



* Re: Softnic test failed + questions
From: Maxime Ramiara @ 2022-05-11 13:13 UTC (permalink / raw)
  To: dev


EDIT: The tutorial that I followed to test the SoftNIC
(https://doc.dpdk.org/guides-19.02/nics/softnic.html) applies to DPDK 19.02
but not to newer versions such as 19.11.12, which is the one I use.
We found out a few things:

" sudo ./dpdk-testpmd --vdev
'net_softnic0,firmware=./firmware.cli,cpu_id=0,conn_port=8086,sc=0' -- -i
--portmask=0x2".  We didn't really understand the impact of "sc=0" by the
way.
First line of firmware.cli file : "link LINK0 dev net_softnic0" instead of
"link LINK0 dev 0000:19:00.0".
Then, we switched the forwarding mode to Input/Output with the CLI. With
all of that, it looked like it worked. We didn't have any error messages
like before.
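
For reference, the forwarding-mode switch was done from the interactive
prompt, roughly like this (from memory):

testpmd> set fwd io
testpmd> start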

If someone is comfortable with testpmd / SoftNIC version differences, help
would be very welcome.

Also, can anyone help us with the pipeline we want to design (see the first
e-mail)?

Regards,

Max.




On Tue, 3 May 2022 at 11:33, Maxime Ramiara <max1mo.ram8@gmail.com> wrote:

> [quoted message trimmed; it repeats the first e-mail above verbatim]


