Hi all,

I have a machine that works without a flaw using DPDK 22.11.4:

# dpdk-devbind.py --bind=vfio-pci 0000:15:00.0
# dpdk-devbind.py --bind=vfio-pci 0000:15:00.3
# dpdk-testpmd -a 0000:15:00.0 -a 0000:15:00.3 -- --tx-first

The dpdk-testpmd utility works as expected:

+++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
RX-packets: 29666849  RX-dropped: 0  RX-total: 29666849
TX-packets: 29666849  TX-dropped: 0  TX-total: 29666849
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I would like to test the same machine with XDP (libbpf 0.7.0 and libxdp 1.2.8) on the same NICs:

# reboot
# export LIBXDP_OBJECT_PATH=/root/libxdp (*)
# ulimit -l unlimited
# dpdk-testpmd --vdev net_af_xdp0,iface=enp21s0f0 --vdev net_af_xdp1,iface=enp21s0f3 -- --tx-first

In this case dpdk-testpmd doesn't work:

+++++++++++++++ Accumulated forward statistics for all ports +++++++++++++++
RX-packets: 0   RX-dropped: 0  RX-total: 0
TX-packets: 64  TX-dropped: 0  TX-total: 64
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

The only suspicious part in the output of the dpdk-testpmd utility is:

[...]
libxdp: XDP flag not supported by libxdp.
libbpf: prog 'xdp_dispatcher': BPF program load failed: Invalid argument
libbpf: prog 'xdp_dispatcher': -- BEGIN PROG LOAD LOG --
Validating prog0() func#1...
btf_vmlinux is malformed
Arg#0 type PTR in prog0() is not supported yet.
processed 0 insns (limit 1000000) max_states_per_insn 0 total_states 0 peak_states 0 mark_read 0
-- END PROG LOAD LOG --
libbpf: failed to load program 'xdp_dispatcher'
libbpf: failed to load object 'xdp-dispatcher.o'
libxdp: Failed to load dispatcher: Invalid argument
libxdp: Falling back to loading single prog without dispatcher
[...]

I made a mistake somewhere for sure, but I haven't understood where yet. Do you have any suggestions for me, please?

Thank you very much!

Ciao,
Alessio

(*) This doesn't seem to have any effect. I copied all the files that I thought relevant into /root/libxdp: Makefile, compat.h, libxdp.c, libxdp.pc, protocol.org, xdp-dispatcher.c, xdp-dispatcher.ll, xsk_def_xdp_prog.c, xsk_def_xdp_prog.ll, xsk_def_xdp_prog_5.3.embed.o, README.org, libxdp.3, libxdp.map, libxdp.pc.template, staticobjs, xdp-dispatcher.c.in, xdp-dispatcher.o, xsk_def_xdp_prog.embed.o, xsk_def_xdp_prog.o, xsk_def_xdp_prog_5.3.ll, bpf_instr.h, libxdp.a, libxdp.mk, libxdp_internal.h, tests, xdp-dispatcher.embed.o, xsk.c, xsk_def_xdp_prog.h, xsk_def_xdp_prog_5.3.c and xsk_def_xdp_prog_5.3.o.
Hi,

I am trying to run the pipeline sample application ("39. Pipeline Application - Data Plane Development Kit 24.03.0-rc3 documentation" on dpdk.org) with the L2fwd example in the examples directory. I modified the ethdev.io and l2fwd.cli scripts as below, but I am not sure if it's the correct way:

-------------------------------------
ethdev.io:

mirroring slots 4 sessions 64
port in 0 ethdev 0000:02:04.0 rxq 0 bsz 32
port in 1 ethdev 0000:02:05.0 rxq 0 bsz 32
port out 0 ethdev 0000:02:04.0 txq 0 bsz 32
port out 1 ethdev 0000:02:05.0 txq 0 bsz 32
-------------------------------------
l2fwd.cli:

pipeline codegen ./l2fwd.spec ./l2fwd.c
pipeline libbuild ./l2fwd.c ./l2fwd.so
mempool MEMPOOL0 meta 0 pkt 2176 pool 32K cache 256 numa 0
ethdev 0000:02:04.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
ethdev 0000:02:05.0 rxq 1 128 MEMPOOL0 txq 1 512 promiscuous on
pipeline PIPELINE0 build lib ./l2fwd.so io ./ethdev.io numa 0
pipeline PIPELINE0 enable thread 1
----------------------------------------

The l2fwd.spec file is the same as the copy in the repo; nothing changed. 0000:02:04.0 and 0000:02:05.0 are the NICs bound to DPDK.

The command to run the application is:

sudo ./pipeline -c 0x3 -- -s ./l2fwd.cli

and no errors show up after the command is executed.

The question is: I connected two hosts to the two NICs bound to DPDK, set IP addresses on both hosts, and attempted to ping between the two hosts, but it failed. From the comment in the l2fwd.spec file, I guess this spec file has the very same function as the L2fwd sample application, with which I can ping between two hosts ("16. L2 Forwarding Sample Application (in Real and Virtualized Environments) - Data Plane Development Kit 24.03.0-rc3 documentation" on dpdk.org). However, the pipeline does not work as I guessed. So, is it my understanding that's flawed, or is it my setup?

Thanks in advance.
Hi Guvenc,

From: Guvenc Gulce <guvenc.gulce@gmail.com>
Sent: Monday, March 18, 2024 6:26 PM
To: users@dpdk.org
Cc: Suanming Mou <suanmingm@nvidia.com>; Ori Kam <orika@nvidia.com>
Subject: mlx5: rte_flow template/async API raw_encap validation bug ?

Hi all,

It is great that we have the rte_flow async/template API integrated into the mlx5 driver code and that it is being established as the new standard rte_flow API. I have the following raw_encap problem when using the rte_flow async/template API with the mlx5 driver:

- A raw_encap rte_flow action template fails during validation when the action mask conf is NULL, but this clearly contradicts the explanation from Suanming Mou's commit 7f6daa490d9, which clearly states that the raw encap action mask is allowed to be NULL.

<Excerpt from commit 7f6daa490d9>
2. RAW encap (encap_data: raw) action conf (raw_data)
   a. action mask conf (not NULL) - encap_data constant.
   b. action mask conf (NULL) - encap_data will change.
</Excerpt from commit 7f6daa490d9>

Commenting out the raw_encap validation would make it possible to create an rte_flow template with a NULL mask conf which can be concretized later on. Things seem to work after relaxing the rte_flow raw_encap validation. The change would look like:

[Suanming] I guess maybe it is due to the raw_encap and raw_decap combination. I added Gregory, who added that code; maybe he can explain it better. @Gregory Etelson <getelson@nvidia.com>

<Excerpt>
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index 35f1ed7a03..3f57fd9286 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -6020,10 +6020,10 @@ flow_hw_validate_action_raw_encap(const struct rte_flow_action *action,
 	const struct rte_flow_action_raw_encap *mask_conf = mask->conf;
 	const struct rte_flow_action_raw_encap *action_conf = action->conf;
 
-	if (!mask_conf || !mask_conf->size)
+/*	if (!mask_conf || !mask_conf->size)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, mask,
-					  "raw_encap: size must be masked");
+					  "raw_encap: size must be masked"); */
 	if (!action_conf || !action_conf->size)
 		return rte_flow_error_set(error, EINVAL,
 					  RTE_FLOW_ERROR_TYPE_ACTION, action,
</Excerpt>

But this cannot be the proper solution. Please advise a solution for how to make raw_encap work with the rte_flow template/async API. If relaxing the validation is ok, I can also prepare and send a patch.

Thanks in advance,
Guvenc Gulce
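[For context, not part of the original mail: the setup that trips the validation above looks roughly like this minimal sketch, assuming a port already configured for the async/template API; the buffer and function names are illustrative only.]

#include <stdint.h>
#include <rte_flow.h>

/* Sketch: build an actions template where the raw_encap data is left
 * unmasked (mask conf == NULL), i.e. per-flow encap data, as commit
 * 7f6daa490d9 describes. encap_hdr is a placeholder header buffer. */
static struct rte_flow_actions_template *
make_raw_encap_template(uint16_t port_id, struct rte_flow_error *err)
{
	static uint8_t encap_hdr[64]; /* illustrative dummy header */

	struct rte_flow_actions_template_attr attr = { .ingress = 1 };

	struct rte_flow_action_raw_encap encap = {
		.data = encap_hdr,
		.size = sizeof(encap_hdr),
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = &encap },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};
	/* NULL mask conf: the encap data is meant to change per flow rule;
	 * this is what mlx5 validation currently rejects. */
	struct rte_flow_action masks[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RAW_ENCAP, .conf = NULL },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_actions_template_create(port_id, &attr,
						actions, masks, err);
}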
Hello Everyone,

Can anybody please share the compatibility matrix for i40en (the VMware native driver) with DPDK versions? I am trying an XL710 NIC (10G dual port) but facing issues:

2024/03/15 10:06:12:842 notice dpdk iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[0]
2024/03/15 10:06:12:842 notice dpdk iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[1]
2024/03/15 10:06:12:842 notice dpdk iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[2]
2024/03/15 10:06:12:842 notice dpdk iavf_configure_queues(): RXDID[22] is not supported, request default RXDID[1] in Queue[3]

VMware details:
ESXi - 7.0.3
i40en version - 1.14.1.0
NVM: 8.5

Any direction on this would be really appreciated.

Thanks,
Chetan
2024-03-15 08:47 (UTC+0100), Jakob Wieckowski:
> Hello DPDK Users,
>
> I have a question regarding the size of mbufs.
> The mbufs are contained in the mempool, and the mempool is a fixed-size object.
>
> Could the mbuf be implemented dynamically with a variable size in the
> mempool area?
>
> In the header of the mbuf you could specify the size of the payload data
> and thus could expand the size of the mbufs.
>
> From my understanding, you just have to keep an eye on the mempool memory
> size so that it doesn't go beyond the limit of the allocated area.
>
> Would this be generally possible?

Hi,

If mbufs could be allocated at any size, the mempool would be a general-purpose allocator, but there is a tradeoff between generality and performance, and rte_mempool pursues the latter. The size of an mbuf's data may be adjusted, but only up to the size specified at mempool creation.

There are solutions for some use cases:
- using multiple mempools for objects of different sizes
- some NICs can split packets, allocating segments from different mempools
- mbufs can be chained (see the sketch after this list)
- rte_pktmbuf_attach_extbuf() + a custom allocator
- the proposed "memarea" library [1] (looks very similar to what you describe)

What is your usage scenario?

[1]: http://inbox.dpdk.org/dev/20230720092254.54157-1-fengchengwen@huawei.com/
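[To illustrate the chaining option above - a minimal sketch, not part of the original reply. It assumes a pool created with rte_pktmbuf_pool_create() with a nonzero data room; the function name alloc_chain() is illustrative.]

#include <string.h>
#include <rte_mbuf.h>

/* Build a chain of mbufs holding `len` bytes of payload when `len`
 * exceeds the data room of a single mbuf from pool `mp`. Each segment
 * is filled before it is chained, so rte_pktmbuf_chain() keeps
 * head->pkt_len and head->nb_segs consistent. */
static struct rte_mbuf *
alloc_chain(struct rte_mempool *mp, const uint8_t *payload, uint32_t len)
{
	struct rte_mbuf *head = NULL;
	uint32_t off = 0;

	while (off < len) {
		struct rte_mbuf *seg = rte_pktmbuf_alloc(mp);
		if (seg == NULL)
			goto fail;

		/* Fill this segment up to its tailroom. */
		uint32_t chunk = RTE_MIN((uint32_t)rte_pktmbuf_tailroom(seg),
					 len - off);
		char *dst = rte_pktmbuf_append(seg, chunk);
		memcpy(dst, payload + off, chunk);
		off += chunk;

		if (head == NULL) {
			head = seg;
		} else if (rte_pktmbuf_chain(head, seg) < 0) {
			/* Chain would exceed the maximum segment count. */
			rte_pktmbuf_free(seg);
			goto fail;
		}
	}
	return head; /* head->pkt_len == len, nb_segs == segment count */

fail:
	rte_pktmbuf_free(head); /* frees the whole chain; NULL is a no-op */
	return NULL;
}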
Regarding this discussion in the dev list:
https://www.mail-archive.com/dev@dpdk.org/msg283938.html

I was told by Intel Premier Support to ask my question here. We are seeing this E810 link issue where cables are pulled and replaced and the link status change is missed. When I patch the suggested workaround into the PMD, the issue goes away.

Is this patch safe for delivery to production environments? Or is it better to wait for the firmware update?

Thanks,
Eric
Hello Carlos, John,

On Tue, Mar 12, 2024 at 9:10 PM Carlos de Souza Moraes Neto <carlosmn@weg.net> wrote:
> I think I messed up in some part and now it is working. I'm using testpmd to read tagged packets (IEC Sampled Values) that come to that port.

Carlos,

Thanks for the confirmation that reverting de5da9d16430 works for you.
I opened a bz: https://bugs.dpdk.org/show_bug.cgi?id=1402

John,

We mentioned this (E810 vlan stripping) issue during the maintainers call this morning.
Can you please find someone at Intel to look into it?

Thanks.

--
David Marchand
Hi,

I am revisiting this issue I have been living with. The workaround I am using is to not memlock memory in our application.

The error is "VIRT memory is too high and mmap fails, Cannot allocate memory (12)."

I tried DPDK 23.11 and I see the same issue I see with DPDK 22.11. In meson I tried setting -Db_lto=true -Dbuildtype=minsize, but it did not help my issue.

We have only 16 GB of memory and I set up 2x1GB=2GB of hugepages (legacy mode EAL setting). I am also running Oracle Linux 9.1, kernel 5.14.0-284. I see the same issue on a VM on an Intel host and on bare metal with an Atom processor (both running Oracle Linux 9.1).

Can I reduce VIRT memory if I switch from static libraries to shared libraries?

Any help will be greatly appreciated.

Thanks,
Ed

-----Original Message-----
From: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Sent: Friday, November 10, 2023 4:38 AM
To: Lombardo, Ed <Ed.Lombardo@netscout.com>
Cc: users <users@dpdk.org>
Subject: Re: DPDK 22.11.2 requires too much VIRT memory, how to reduce

2023-11-10 12:31 (UTC+0300), Dmitry Kozlyuk:
> Hi Ed,
>
> 2023-11-10 00:16 (UTC+0000), Lombardo, Ed:
> > I finally finished testing all the options and found the VIRT value
> > can be reduced from 66 GB to 16 GB with the --legacy-mem setting in the EAL init arguments.
>
> Right.
> By default, DPDK can use up to 64 GB of hugepage memory, so it
> reserves 64 GB of VIRT (but does not map most of it); RES should be
> low until the app actually allocates something.
> In legacy mode, DPDK maps all available hugepage memory at startup,
> in your case 16 GB, so VIRT and RES should be close.
>
> > So I therefore had to increase the VM memory from 16 GB to 24 GB
> > (instead of 80 GB without this setting).
>
> I don't understand why you have to do that.
> Possible VIRT is not limited by available RAM.
> DPDK should be able to reserve 64 GB of VIRT on a machine with 16 GB
> of RAM; it will just be unable to map more than 16 GB (obviously).

Sorry, I've sent the message early by mistake.

> > I wonder what do we give up with this setting?

Most importantly, in legacy mode DPDK will consume all available hugepages at startup and will not free them back to the system until the app is terminated. The default dynamic mode allocates and frees physical RAM on demand. Some advanced DPDK memory APIs don't work in legacy mode.

> > All the other settings I tried and combinations of these had no
> > impact (socket-limit=2048, single-file-segments, no-shconf, and
> > no-telemetry) on VIRT memory.

Right, they should not.

DPDK assumes that VIRT reservation is almost free and unlimited.
Could it be that your system somehow limits it?
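[A minimal sketch, not from the thread, of passing --legacy-mem at EAL init when the application builds its own argument vector; the flag values shown are illustrative.]

#include <stdio.h>
#include <rte_eal.h>

int
main(void)
{
	/* Illustrative EAL argument vector: --legacy-mem makes DPDK map
	 * the existing hugepages at startup instead of reserving a large
	 * (up to 64 GB by default) virtual address range for dynamic
	 * mode, at the cost of holding all hugepages until exit. */
	char *eal_argv[] = {
		"app",
		"--legacy-mem",
		"--socket-mem", "2048",	/* use 2 GB on socket 0 */
	};
	int eal_argc = sizeof(eal_argv) / sizeof(eal_argv[0]);

	if (rte_eal_init(eal_argc, eal_argv) < 0) {
		fprintf(stderr, "rte_eal_init failed\n");
		return 1;
	}

	/* ... application setup ... */

	rte_eal_cleanup();
	return 0;
}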
Hi Team,
Gentle reminder! Can somebody please help us with this query?
We are validating fdir with the test-pmd tool. We are getting the below error while trying to add a flow create rule.
Syntax:
testpmd> flow create 0 ingress pattern eth / ipv4 src is <ipv4 src> dst is <ipv4 dst> / tcp src is <inner sport> dst is <inner dport> / end actions rss queues 2 3 end / end
Rule:
flow create 0 ingress pattern eth / ipv4 src is 20.20.20.2 dst is 20.20.20.4 / tcp src is 80 dst is 1501 / end actions rss queues 0 1 2 3 end / end
port_flow_complain(): Caught PMD error type 16 (specific action): cause: 0x7fff9495c648, RSS types must be empty while configuring queue region: Operation not supported
Note: this is for i40e
Please help us on this.
Thanks,
Vajith
-----Original Message-----
From: Ferruh Yigit <ferruh.yigit@amd.com>
Sent: Monday, March 11, 2024 4:38 PM
To: Vajith Raghman <VAJITH.RAGHMAN@tatacommunications.com>; Beilei Xing <beilei.xing@intel.com>; Jeff Guo <jia.guo@intel.com>; Bruce Richardson <bruce.richardson@intel.com>
Cc: dev@dpdk.org; users@dpdk.org
Subject: Re: flow create with queue range not working
On 3/11/2024 10:32 AM, Vajith Raghman wrote:
> Hi Team,
>
> We are validating the fdir with test-pmd tool. We are getting below
> error while trying to add the flow create rule for the same.
>
> Syntax:
>
> testpmd> flow create 0 ingress pattern eth / ipv4 src is <ipv4 src>
> dst is <ipv4 dst> / tcp src is <inner sport> dst is <inner dport> / end
> actions rss queues 2 3 end / end
>
> rule:
>
> flow create 0 ingress pattern eth / ipv4 src is 20.20.20.2 dst is
> 20.20.20.4 / tcp src is 80 dst is 1501 / end actions rss queues 0 1 2
> 3 end / end
>
> port_flow_complain(): Caught PMD error type 16 (specific action): cause:
> 0x7fff9495c648, RSS types must be empty while configuring queue region:
> Operation not supported
>
> Please help us on this.
>
I guess the error is from i40e; cc'ing relevant maintainers.
Hi David!
I think I messed up in some part and now it is working. I'm using testpmd to read tagged packets (IEC Sampled Values) that arrive at that port.
This is the part to confirm: I changed to 23.11 and reverted de5da9d16430.
root@SAL7:~/downloads/dpdk# git checkout v23.11
Previous HEAD position was f262f16087 version: 22.11.0
HEAD is now at eeb0605f11 version: 23.11.0
root@SAL7:~/downloads/dpdk# cat VERSION
23.11.0
root@SAL7:~/downloads/dpdk# git revert de5da9d16430
Auto-merging drivers/net/ice/ice_ethdev.c
Auto-merging drivers/net/ice/ice_ethdev.h
[detached HEAD 1b41a38e69] Revert "net/ice: support double VLAN"
2 files changed, 15 insertions(+), 408 deletions(-)
After compiling and installing it, I start testpmd with:
root@SAL7:~/downloads/dpdk/build# dpdk-testpmd -a 0000:01:00.0 -- --enable-hw-vlan-strip -i
EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected static linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:01:00.0 (socket -1)
ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (single VLAN mode)
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
Warning: NUMA should be configured manually by using --port-numa-config and --ring-numa-config parameters along with --numa.
testpmd: create a new mbuf pool <mb_pool_0>: n=203456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
Warning! port-topology=paired and odd forward ports number, the last port will pair with itself.
Configuring Port 0 (socket 0)
ice_set_rx_function(): Using AVX2 OFFLOAD Vector Rx (port 0).
Port 0: link state change event
Port 0: B4:96:91:EB:84:94
Checking link statuses...
Done
testpmd> set promisc all on
testpmd> set verbose 1
Change verbose level from 0 to 1
testpmd>
testpmd> start
io packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support enabled, MP allocation mode: native
Logical Core 1 (socket 0) forwards packets on 1 streams:
RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
src=B4:B1:5A:18:04:3B - dst=01:0C:CD:04:00:01 - pool=mb_pool_0 - type=0x88ba - length=231 - nb_segs=1 - VLAN tci=0xe003 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: RTE_MBUF_F_RX_VLAN RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_VLAN_STRIPPED RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
port 0/queue 0: received 1 packets
src=B4:B1:5A:18:04:3B - dst=01:0C:CD:04:00:00 - pool=mb_pool_0 - type=0x88ba - length=863 - nb_segs=1 - VLAN tci=0x6004 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: RTE_MBUF_F_RX_VLAN RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_VLAN_STRIPPED RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
port 0/queue 0: received 1 packets
src=B4:B1:5A:18:04:3B - dst=01:0C:CD:04:00:01 - pool=mb_pool_0 - type=0x88ba - length=231 - nb_segs=1 - VLAN tci=0xe003 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: RTE_MBUF_F_RX_VLAN RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_VLAN_STRIPPED RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
port 0/queue 0: received 1 packets
src=B4:B1:5A:18:04:3B - dst=01:0C:CD:04:00:00 - pool=mb_pool_0 - type=0x88ba - length=863 - nb_segs=1 - VLAN tci=0x6004 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: RTE_MBUF_F_RX_VLAN RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_VLAN_STRIPPED RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
port 0/queue 0: received 1 packets
src=B4:B1:5A:18:04:3B - dst=01:0C:CD:04:00:01 - pool=mb_pool_0 - type=0x88ba - length=231 - nb_segs=1 - VLAN tci=0xe003 - hw ptype: L2_ETHER - sw ptype: L2_ETHER - l2_len=14 - Receive queue=0x0
ol_flags: RTE_MBUF_F_RX_VLAN RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_VLAN_STRIPPED RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD
Sincerely,
Carlos de Souza Moraes Neto
Substation Business Center Dept.
Phone: +55 (47) 3276-5786 Mobile: +55 (47) 99927-9354 Skype: carlossmneto
WEG Equipamentos Elétricos S/A. - Transmissão & Distribuição
www.weg.net
-----Original Message-----
From: David Marchand <david.marchand@redhat.com>
Sent: Tuesday, March 12, 2024 08:52
To: Carlos de Souza Moraes Neto <carlosmn@weg.net>
Cc: Bruce Richardson <bruce.richardson@intel.com>; Vladimir Medvedkin <vladimir.medvedkin@intel.com>; users@dpdk.org; Rafael Bonet Scheffer <rafaelbonet@weg.net>
Subject: Re: [E810 Offload VLAN Stripping]
Hello Carlos,
On Mon, Mar 11, 2024 at 7:07 PM Carlos de Souza Moraes Neto <carlosmn@weg.net> wrote:
> It worked in 22.11, but when I tried 23.11 and reverted de5da9d16430, it didn't work.
Are you testing with testpmd?
If not, please double check first with testpmd.
And describe the traffic you are testing with.
With this, I hope Intel can reproduce your issue.
--
David Marchand
Hi David,

It worked in 22.11, but when I tried 23.11 and reverted de5da9d16430, it didn't work.

Sincerely,
Carlos Moraes

-----Original Message-----
From: David Marchand <david.marchand@redhat.com>
Sent: Monday, March 11, 2024 12:11
To: Carlos de Souza Moraes Neto <carlosmn@weg.net>; Bruce Richardson <bruce.richardson@intel.com>; Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Cc: users@dpdk.org
Subject: Re: [E810 Offload VLAN Stripping]

Hello Carlos,

On Mon, Mar 11, 2024 at 10:29 AM Carlos de Souza Moraes Neto <carlosmn@weg.net> wrote:
> Hello,

Adding some Intel folks.

> I'm currently working on enabling the RTE_ETH_RX_OFFLOAD_VLAN_STRIP offload feature to strip VLAN tags and store the VLAN information in the vlan_tci field while using an Intel E810-XXVDA2 NIC. However, the VLAN tags are not being stripped. I've already tried to update DPDK (23.11), E810 firmware (4.40), ICE and DDP (1.3.35.0) but nothing. My console output is:
>
> EAL: Detected CPU lcores: 8
> EAL: Detected NUMA nodes: 1
> EAL: Detected shared linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> EAL: Using IOMMU type 1 (Type 1)
> EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:01:00.0 (socket -1)
> ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (single VLAN mode)
> EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:01:00.1 (socket -1)
> ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (single VLAN mode)
> EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.0 (socket -1)
> EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.1 (socket -1)
> EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.2 (socket -1)
> EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.3 (socket -1)
> TELEMETRY: No legacy callbacks, legacy socket not created
> ice_set_rx_function(): Using AVX2 OFFLOAD Vector Rx (port 0).
> ice_set_tx_function(): Using AVX2 OFFLOAD Vector Tx (port 0).
> ice_vsi_config_outer_vlan_stripping(): Single VLAN mode (SVM) does not support qinq
> ice_set_rx_function(): Using AVX2 OFFLOAD Vector Rx (port 1).
> ice_set_tx_function(): Using AVX2 OFFLOAD Vector Tx (port 1).
> ice_vsi_config_outer_vlan_stripping(): Single VLAN mode (SVM) does not support qinq

I think I reproduced your issue (though I see messages claiming support for double VLAN in my setup).

I tested with testpmd in v23.11 and an E810 NIC:
04:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-XXV for SFP (rev 02)

# ./build/app/dpdk-testpmd -a 0000:04:00.0 -- --enable-hw-vlan-strip -i
...
ice_load_pkg_type(): Active package is: 1.3.30.0, ICE OS Default Package (double VLAN mode)
...
testpmd> set verbose 1
Change verbose level from 0 to 1
testpmd> start
...
port 0/queue 0: received 1 packets
src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - pool=mb_pool_0 - type=0x8100 - length=60 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER_VLAN L3_IPV4 - l2_len=18 - l3_len=20 - Receive queue=0x0
ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD

I did a bisect; v22.11 worked for me, and I ended up on:
de5da9d16430 ("net/ice: support double VLAN")
as the first bad commit.

Reverting it restores vlan stripping for me on RHEL9.

port 0/queue 0: received 1 packets
src=00:00:00:00:00:00 - dst=FF:FF:FF:FF:FF:FF - pool=mb_pool_0 - type=0x0800 - length=60 - nb_segs=1 - VLAN tci=0x2a - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_NONFRAG - sw ptype: L2_ETHER L3_IPV4 - l2_len=14 - l3_len=20 - Receive queue=0x0
ol_flags: RTE_MBUF_F_RX_VLAN RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD RTE_MBUF_F_RX_VLAN_STRIPPED RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD

Carlos, do you see the same?
Bruce, Vladimir, could you have a look and confirm on your side?

Thanks!

--
David Marchand
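[As a side note, mine rather than David's: in application code the offload under discussion is requested roughly as in this hedged sketch; queue setup and most error handling are trimmed, and the helper name is hypothetical.]

#include <rte_ethdev.h>

/* Sketch: request RX VLAN stripping at configure time, then (re)apply
 * it at runtime. On real hardware, check dev_info.rx_offload_capa for
 * RTE_ETH_RX_OFFLOAD_VLAN_STRIP before requesting it. */
static int
enable_vlan_strip(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf port_conf = {
		.rxmode = {
			.offloads = RTE_ETH_RX_OFFLOAD_VLAN_STRIP,
		},
	};
	int ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
	if (ret < 0)
		return ret;

	/* VLAN offloads can also be set after configuration. The mask is
	 * absolute: VLAN offload bits left out of it are disabled. */
	return rte_eth_dev_set_vlan_offload(port_id, RTE_ETH_VLAN_STRIP_OFFLOAD);
}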
Hi Stephen,
Thank you!
Will there be any conflict between RSS and the flow rule that I am going to create?
If RSS and a flow rule have the same criteria, which one takes priority?
Regards,
Bala
-----Original Message-----
From: Stephen Hemminger <stephen@networkplumber.org>
Sent: Monday, March 11, 2024 9:16 PM
To: Balakrishnan K <Balakrishnan.K1@tatacommunications.com>
Cc: dev@dpdk.org; users@dpdk.org
Subject: Re: is RSS and Flow director can work together
On Mon, 11 Mar 2024 09:17:01 +0000
Balakrishnan K <Balakrishnan.K1@tatacommunications.com> wrote:
> Hi All,
> I want to use the dpdk application with RSS and flow director.
> Is it possible to use both at the same time in an application?
> In RSS, I am using
> action_rss_tcp.types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY
> | ETH_RSS_L3_DST_ONLY; to receive similar traffic on the same core.
> There is one specific case where I want to distribute the traffic across
> cores: the incoming traffic has the same src and dst IP, for example (src ip: 10.10.10.1, dst ip: 20.20.20.2).
> With RSS enabled, all this traffic ends up on one core, while the remaining cores are idle, impacting performance.
> I am planning to enable flow director and create a rule to distribute the traffic for the combination src/dst ip (10.10.10.1/20.20.20.2) along with RSS.
>
> If RSS and a flow rule have the same criteria, which one takes priority?
>
> Regards,
> Bala
You can do that with the rte_flow action rte_flow_action_rss.
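[To make Stephen's pointer concrete - a minimal sketch of mine, with illustrative addresses and queue numbers, using post-21.11 RTE_ETH_* names (older releases such as 20.11 use the un-prefixed equivalents).]

#include <rte_flow.h>
#include <rte_ip.h>
#include <rte_common.h>

/* Illustrative rule: spread traffic matching src 10.10.10.1 /
 * dst 20.20.20.2 across queues 0-3 with a per-rule RSS action. */
static struct rte_flow *
add_rss_spread_rule(uint16_t port_id, struct rte_flow_error *err)
{
	static const uint16_t queues[] = { 0, 1, 2, 3 };

	struct rte_flow_attr attr = { .ingress = 1 };

	struct rte_flow_item_ipv4 ip_spec = {
		.hdr = {
			.src_addr = RTE_BE32(RTE_IPV4(10, 10, 10, 1)),
			.dst_addr = RTE_BE32(RTE_IPV4(20, 20, 20, 2)),
		},
	};
	struct rte_flow_item_ipv4 ip_mask = {
		.hdr = {
			.src_addr = RTE_BE32(UINT32_MAX),
			.dst_addr = RTE_BE32(UINT32_MAX),
		},
	};
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_IPV4,
		  .spec = &ip_spec, .mask = &ip_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_TCP },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};

	struct rte_flow_action_rss rss = {
		/* types left 0: some PMDs (see the i40e queue-region error
		 * earlier in this digest) insist on empty RSS types when
		 * explicit queues are given. */
		.types = 0,
		.queue_num = RTE_DIM(queues),
		.queue = queues,
		/* key/key_len left 0: use the PMD's default key. */
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, err);
}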
Hello,

I'm currently working on enabling the RTE_ETH_RX_OFFLOAD_VLAN_STRIP offload feature to strip VLAN tags and store the VLAN information in the vlan_tci field while using an Intel E810-XXVDA2 NIC. However, the VLAN tags are not being stripped. I've already tried to update DPDK (23.11), E810 firmware (4.40), ICE and DDP (1.3.35.0) but nothing. My console output is:

EAL: Detected CPU lcores: 8
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: VFIO support initialized
EAL: Using IOMMU type 1 (Type 1)
EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:01:00.0 (socket -1)
ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (single VLAN mode)
EAL: Probe PCI driver: net_ice (8086:159b) device: 0000:01:00.1 (socket -1)
ice_load_pkg_type(): Active package is: 1.3.35.0, ICE OS Default Package (single VLAN mode)
EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.0 (socket -1)
EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.1 (socket -1)
EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.2 (socket -1)
EAL: Probe PCI driver: net_e1000_igb (8086:1521) device: 0000:02:00.3 (socket -1)
TELEMETRY: No legacy callbacks, legacy socket not created
ice_set_rx_function(): Using AVX2 OFFLOAD Vector Rx (port 0).
ice_set_tx_function(): Using AVX2 OFFLOAD Vector Tx (port 0).
ice_vsi_config_outer_vlan_stripping(): Single VLAN mode (SVM) does not support qinq
ice_set_rx_function(): Using AVX2 OFFLOAD Vector Rx (port 1).
ice_set_tx_function(): Using AVX2 OFFLOAD Vector Tx (port 1).
ice_vsi_config_outer_vlan_stripping(): Single VLAN mode (SVM) does not support qinq

Sincerely,
Carlos Moraes
Hi All,

I want to use a DPDK application with RSS and flow director. Is it possible to use both at the same time in an application?

In RSS, I am using

action_rss_tcp.types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY;

to receive similar traffic on the same core.

There is one specific case where I want to distribute the traffic across cores: the incoming traffic has the same src and dst IP, for example (src ip: 10.10.10.1, dst ip: 20.20.20.2). With RSS enabled, all this traffic ends up on one core, while the remaining cores are idle, impacting performance. I am planning to enable flow director and create a rule to distribute the traffic for the combination src/dst ip (10.10.10.1/20.20.20.2) along with RSS.

If RSS and a flow rule have the same criteria, which one takes priority?

Regards,
Bala
Hi Stephen,

Thank you for the quick response! I was going to use this network card just for testing.

laptop :: ~ % sudo lspci -n -s 00:1f.6
00:1f.6 0200: 8086:0d4f

I will check the dpdk and kernel sources.

On Fri, Mar 8, 2024 at 10:14 PM Stephen Hemminger <stephen@networkplumber.org> wrote:
> Most likely the DPDK E1000 driver doesn't support the same full range of PCI
> device IDs as the kernel driver. What is the PCI information for you? I have
> a similar device on this machine.
> [...]
On Fri, 8 Mar 2024 21:19:08 +0000
sonntex <sonntex@gmail.com> wrote:
> Hi,
>
> I am trying to configure dpdk on my laptop and get "no probed ethernet
> devices" in dpdk-testpmd utility:
>
> laptop :: ~ % sudo dpdk-testpmd -l 0-1 -n 4 --log-level=debug -- -i
> EAL: Detected CPU lcores: 8
> EAL: Detected NUMA nodes: 1
> EAL: Detected static linkage of DPDK
> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
> EAL: Selected IOVA mode 'VA'
> EAL: VFIO support initialized
> testpmd: No probed ethernet devices
> Interactive-mode selected
> testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
> testpmd: preferred mempool ops selected: ring_mp_mc
> Done
> testpmd> ...
>
> Checked that dpdk 23.07 supports my NIC at
> http://doc.dpdk.org/guides/rel_notes/release_23_07.html:
>
> Intel Corporation Ethernet Connection (16) I219-V
> Firmware version: 0.6-4
> Device id (pf): 8086:1a1f
> Driver version(in-tree): 5.15.113-rt64 (Ubuntu22.04.2)(e1000)
>
> Configuration:
>
> laptop :: ~ % pacman -Ss dpdk
> extra/dpdk 23.07-1 [installed]
> A set of libraries and drivers for fast packet processing
>
> laptop :: ~ % sudo ethtool -i enp0s31f6
> driver: e1000e
> version: 6.7.8-arch1-1
> firmware-version: 0.6-4
> expansion-rom-version:
> bus-info: 0000:00:1f.6
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: yes
> supports-register-dump: yes
> supports-priv-flags: yes
>
> laptop :: ~ % sudo modprobe vfio-pci
> laptop :: ~ % sudo lsmod | grep vfio
> vfio_pci 16384 0
> vfio_pci_core 86016 1 vfio_pci
> vfio_iommu_type1 45056 0
> vfio 73728 3 vfio_pci_core,vfio_iommu_type1,vfio_pci
> iommufd 106496 1 vfio
> irqbypass 12288 2 vfio_pci_core,kvm
>
> laptop :: ~ % sudo dpdk-hugepages.py -m
> laptop :: ~ % sudo dpdk-hugepages.py -p 2M --setup 1G
> laptop :: ~ % sudo dpdk-hugepages.py -s
> Node Pages Size Total
> 0 512 2Mb 1Gb
> Hugepages mounted on /dev/hugepages
>
> laptop :: ~ % sudo dpdk-devbind.py --status-dev net
> Network devices using kernel driver
> ===================================
> 0000:00:14.3 'Comet Lake PCH-LP CNVi WiFi 02f0' if=wlan0 drv=iwlwifi
> unused= *Active*
> 0000:00:1f.6 'Ethernet Connection (10) I219-V 0d4f' if=enp0s31f6 drv=e1000e
> unused=
>
> laptop :: ~ % sudo dpdk-devbind.py -b vfio-pci 0000:00:1f.6
> laptop :: ~ % sudo dpdk-devbind.py --status-dev net
> Network devices using DPDK-compatible driver
> ============================================
> 0000:00:1f.6 'Ethernet Connection (10) I219-V 0d4f' drv=vfio-pci
> unused=e1000e
> Network devices using kernel driver
> ===================================
> 0000:00:14.3 'Comet Lake PCH-LP CNVi WiFi 02f0' if=wlan0 drv=iwlwifi
> unused=vfio-pci *Active
>
> Any suggestions on what might be missing here?
>
> Thanks!
Most likely the DPDK E1000 driver doesn't support the same full range of PCI device
IDs as the kernel driver. What is the PCI information for you? I have a similar
device on this machine.
$ lspci -n -s 00:1f.6
00:1f.6 0200: 8086:15fc (rev 20)
In my case the part that matters is the 15fc.
Looking in DPDK drivers/net/e1000/base/e1000_hw.h, there is no #define for that
type and no entry in drivers/net/e1000/em_ethdev.c:pci_id_em_map[]
In linux kernel the entry is:
drivers/net/ethernet/intel/e1000e/hw.h:#define E1000_DEV_ID_PCH_TGP_I219_V13 0x15FC
The Intel drivers are not in sync. It is up to the E1000 DPDK
maintainers to solve.
Note: this older E1000 hardware is not fast, and using DPDK
except as a test bed is really not worth it.
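[As an illustration of the sync Stephen describes - a hypothetical sketch, not an actual patch: the DPDK-side define does not exist yet, the hunk contexts are elided, and registering the ID alone may not suffice (the base code would likely also need to recognize the new MAC type).]

--- a/drivers/net/e1000/base/e1000_hw.h
+++ b/drivers/net/e1000/base/e1000_hw.h
@@ ... @@
+/* Hypothetical define mirroring the kernel's e1000e hw.h entry. */
+#define E1000_DEV_ID_PCH_TGP_I219_V13	0x15FC

--- a/drivers/net/e1000/em_ethdev.c
+++ b/drivers/net/e1000/em_ethdev.c
@@ ... @@ static const struct rte_pci_id pci_id_em_map[] = {
+	/* Hypothetical row exposing the ID to the em PMD. */
+	{ RTE_PCI_DEVICE(E1000_INTEL_VENDOR_ID, E1000_DEV_ID_PCH_TGP_I219_V13) },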
Hi all,

I've made a minimalist example app showing how to set up symmetric RSS for the X710 using RTE_FLOW rules - check it out here:
https://github.com/lukashino/i40e-symmetric-rss-rte-flow

Lukas

On 08. 03. 24 6:53, Balakrishnan K wrote:
> Hi Stephen,
> Thanks for the response. I will try the below options and come back if any help is required.
>
> Regards,
> Bala
>
> -----Original Message-----
> From: Stephen Hemminger <stephen@networkplumber.org>
> Sent: Wednesday, March 6, 2024 8:34 PM
> To: Balakrishnan K <Balakrishnan.K1@tatacommunications.com>
> Cc: users@dpdk.org
> Subject: Re: Symmetric RSS Hashing support in DPDK
>
> On Wed, 6 Mar 2024 07:28:40 +0000
> Balakrishnan K <Balakrishnan.K1@tatacommunications.com> wrote:
>
>> Hello,
>> Our application needs symmetric hashing to handle the reverse
>> traffic on the same core, and also to improve performance by distributing the traffic across cores.
>> Tried using an RSS config as below:
>> action_rss_tcp.types = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_L3_SRC_ONLY |
>> ETH_RSS_L3_DST_ONLY | ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY;
>> but could not get the desired result.
>> Is there any option or API available to enable symmetric RSS hashing?
>> We are using DPDK 20.11 and an Intel X710 10GbE NIC.
>>
>> Regards,
>> Bala
>
> With XL710 there are two choices:
> 1. Set RSS hash function to RTE_ETH_HASH_SYMMETRIC_TOEPLITZ in
>    the rte_eth_rss_conf passed in during configure
> 2. Use default (non symmetric TOEPLITZ) but pass in a rss_key that
>    has duplicated bits in the right place. Like:
>
>    0x6d5a 0x6d5a 0x6d5a 0x6d5a
>    0x6d5a 0x6d5a 0x6d5a 0x6d5a
>    0x6d5a 0x6d5a 0x6d5a 0x6d5a
>    0x6d5a 0x6d5a 0x6d5a 0x6d5a
>    0x6d5a 0x6d5a 0x6d5a 0x6d5a
>
> https://www.ndsl.kaist.edu/~kyoungsoo/papers/TR-symRSS.pdf
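[Not part of the thread: a minimal sketch of Stephen's second option, the duplicated-16-bit-pattern key. It uses post-21.11 RTE_ETH_* names (20.11 uses the un-prefixed ones) and assumes the NIC accepts a 40-byte key; i40e devices may report a different hash_key_size via rte_eth_dev_info_get(), in which case the pattern must be extended to that length.]

#include <rte_ethdev.h>

/* Symmetric Toeplitz trick: a key built from a repeated 16-bit word
 * hashes (src,dst) and (dst,src) to the same value. 40 bytes shown;
 * extend the pattern if dev_info.hash_key_size is larger. */
static uint8_t sym_rss_key[40] = {
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
	0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a, 0x6d, 0x5a,
};

static int
configure_symmetric_rss(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq)
{
	struct rte_eth_conf conf = {
		.rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
		.rx_adv_conf = {
			.rss_conf = {
				.rss_key = sym_rss_key,
				.rss_key_len = sizeof(sym_rss_key),
				.rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP,
			},
		},
	};
	return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}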