Hello Asaf,

we want to mark packets both ways: in software (by editing mbufs) or in hardware (by ingress rules). Then we want to use that mark to match packets on egress, where we want to change the destination MAC address or the VLAN VID. Is there any way to do this?

Thank you,
David

On 13/09/2022 21:39, Asaf Penso wrote:
> Hello David,
>
> Can we first understand what you would like to achieve?
> Accordingly, we can suggest a way to do so.
>
> Regards,
> Asaf Penso
>
>> -----Original Message-----
>> From: David Vodak
>> Sent: Tuesday, September 13, 2022 5:29 PM
>> To: Matan Azrad ; Slava Ovsiienko
>> Cc: dev@dpdk.org
>> Subject: Egress RTE flow rule with mark in matching pattern on mlx5
>>
>> Hello,
>>
>> I am trying to offload an egress flow rule with MARK in the matching
>> pattern to NVIDIA ConnectX-5 and ConnectX-6 NICs, but I keep getting
>> the same results. I am using DPDK 21.11.1.
>>
>> If I try to offload this rule to the mlx5 NIC without any device
>> arguments, the PMD says that I need to enable the extended metadata
>> feature:
>>
>> # dpdk-testpmd -a 65:00.0 -- -i
>> EAL: Detected CPU lcores: 40
>> EAL: Detected NUMA nodes: 1
>> EAL: Detected shared linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: No free 1048576 kB hugepages reported on node 0
>> EAL: No available 1048576 kB hugepages reported
>> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:65:00.0 (socket 0)
>> TELEMETRY: No legacy callbacks, legacy socket not created
>> Interactive-mode selected
>> testpmd: create a new mbuf pool : n=459456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>>
>> Warning! port-topology=paired and odd forward ports number, the last
>> port will pair with itself.
>>
>> Configuring Port 0 (socket 0)
>> Port 0: 1C:34:DA:41:66:1C
>> Checking link statuses...
>> Done
>> testpmd> flow create 0 egress group 0 pattern eth / mark id spec 4 id mask 4 / end actions set_mac_dst mac_addr FE:FE:CA:FE:FE:FE / end
>> port_flow_complain(): Caught PMD error type 13 (specific pattern item):
>> cause: 0x7ffc030aa198, extended metadata feature isn't enabled:
>> Operation not supported
>>
>> If I run testpmd with dv_xmeta_en set to 1 or 2, I can only create
>> that rule in a group other than 0. But I cannot offload a rule which
>> contains a JUMP action, so I cannot jump to the group where the rule
>> with MARK in the matching pattern can be offloaded.
>>
>> # dpdk-testpmd -a 65:00.0,dv_xmeta_en=1 -- -i
>> EAL: Detected CPU lcores: 40
>> EAL: Detected NUMA nodes: 1
>> EAL: Detected shared linkage of DPDK
>> EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
>> EAL: Selected IOVA mode 'PA'
>> EAL: No free 1048576 kB hugepages reported on node 0
>> EAL: No available 1048576 kB hugepages reported
>> EAL: Probe PCI driver: mlx5_pci (15b3:1017) device: 0000:65:00.0 (socket 0)
>> TELEMETRY: No legacy callbacks, legacy socket not created
>> Interactive-mode selected
>> testpmd: create a new mbuf pool : n=459456, size=2176, socket=0
>> testpmd: preferred mempool ops selected: ring_mp_mc
>>
>> Warning! port-topology=paired and odd forward ports number, the last
>> port will pair with itself.
>>
>> Configuring Port 0 (socket 0)
>> Port 0: 1C:34:DA:41:66:1C
>> Checking link statuses...
>> Done
>> testpmd> flow create 0 egress group 0 pattern eth / mark id spec 4 id mask 4 / end actions set_mac_dst mac_addr FE:FE:CA:FE:FE:FE / end
>> port_flow_complain(): Caught PMD error type 1 (cause unspecified):
>> cannot create modification action: Cannot allocate memory
>> testpmd> flow create 0 egress group 1 pattern eth / mark id spec 4 id mask 4 / end actions set_mac_dst mac_addr FE:FE:CA:FE:FE:FE / end
>> Flow rule #0 created
>> testpmd> flow create 0 egress pattern eth / end actions jump group 1 / end
>> port_flow_complain(): Caught PMD error type 1 (cause unspecified):
>> cannot create modification action: Cannot allocate memory
>>
>> Is there any way to work around this, or do I need to start using
>> similar pattern items such as META?
>>
>> Thank you,
>>
>> David
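For context, here is a rough sketch of the META-based alternative mentioned at the end of the quoted mail. MARK is delivered to software on the Rx path, so the usual way to tag a packet from software for egress matching is the Tx metadata dynamic mbuf field together with the META pattern item. This is only a sketch assuming DPDK 21.11; the helper names (setup_tx_metadata, tag_mbuf, create_egress_rule) are illustrative, and whether such a rule is accepted in egress group 0 on mlx5 still depends on the dv_xmeta_en setting discussed above, so it is not a confirmed answer from this thread.

/* Sketch: tag packets from software with the Tx metadata dynamic field
 * and match them on egress with the META item instead of MARK.
 * Uses experimental rte_flow API (compile with -DALLOW_EXPERIMENTAL_API).
 */
#include <rte_mbuf.h>
#include <rte_flow.h>

/* Register the dynamic metadata field and flag once at startup,
 * before any mbuf is tagged. */
static int
setup_tx_metadata(void)
{
	return rte_flow_dynf_metadata_register();
}

/* Store the metadata value in the mbuf and raise the Tx metadata
 * dynamic flag so the PMD passes it to the egress pipeline. */
static void
tag_mbuf(struct rte_mbuf *m, uint32_t value)
{
	rte_flow_dynf_metadata_set(m, value);
	m->ol_flags |= rte_flow_dynf_metadata_mask;
}

/* Egress rule: match META (same spec/mask as the MARK rule in the
 * thread) and rewrite the destination MAC address. */
static struct rte_flow *
create_egress_rule(uint16_t port_id, struct rte_flow_error *error)
{
	struct rte_flow_attr attr = { .egress = 1 };
	struct rte_flow_item_meta meta_spec = { .data = 4 };
	struct rte_flow_item_meta meta_mask = { .data = 4 };
	struct rte_flow_item pattern[] = {
		{ .type = RTE_FLOW_ITEM_TYPE_ETH },
		{ .type = RTE_FLOW_ITEM_TYPE_META,
		  .spec = &meta_spec, .mask = &meta_mask },
		{ .type = RTE_FLOW_ITEM_TYPE_END },
	};
	struct rte_flow_action_set_mac set_dmac = {
		.mac_addr = { 0xFE, 0xFE, 0xCA, 0xFE, 0xFE, 0xFE },
	};
	struct rte_flow_action actions[] = {
		{ .type = RTE_FLOW_ACTION_TYPE_SET_MAC_DST,
		  .conf = &set_dmac },
		{ .type = RTE_FLOW_ACTION_TYPE_END },
	};

	return rte_flow_create(port_id, &attr, pattern, actions, error);
}

The matching side can be tried from testpmd with something like "flow create 0 egress pattern eth / meta data spec 4 data mask 4 / end actions set_mac_dst mac_addr FE:FE:CA:FE:FE:FE / end"; whether that needs dv_xmeta_en and a non-zero group on mlx5 is exactly the open question above.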