DPDK patches and discussions
* [Bug 1266] net/mlx5: cannot create rte_flow rule matching ethernet multicast with jump action on bond mode 4
@ 2023-07-19 13:48 bugzilla
  2025-10-27 14:07 ` [DPDK/ethdev Bug " bugzilla
  0 siblings, 1 reply; 2+ messages in thread
From: bugzilla @ 2023-07-19 13:48 UTC (permalink / raw)
  To: dev


https://bugs.dpdk.org/show_bug.cgi?id=1266

            Bug ID: 1266
           Summary: net/mlx5: cannot create rte_flow rule matching
                    ethernet multicast with jump action on bond mode 4
           Product: DPDK
           Version: unspecified
          Hardware: x86
                OS: Linux
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: ethdev
          Assignee: dev@dpdk.org
          Reporter: vojanec@cesnet.cz
  Target Milestone: ---

[DPDK Version]
commit 238f67ca9cc00be4248b14d9ca4412edb7da62f6 (HEAD -> main, origin/main,
origin/HEAD)
Author: Ajit Khaparde <ajit.khaparde@broadcom.com>
Date:   Wed Jul 12 11:05:30 2023 -0700

    doc: update firmware version in bnxt guide

    Update earliest supported firmware version for 22.11 release.

    Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>

[OS version]
Operating System: Oracle Linux Server 8.7
Kernel: Linux 4.18.0-348.12.2.el8_5.x86_64
Architecture: x86-64

[DPDK build]
meson build
ninja -C build 

[Network devices]
0000:3b:00.0 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f0np0 drv=mlx5_core
unused= 
0000:3b:00.1 'MT2892 Family [ConnectX-6 Dx] 101d' if=ens1f1np1 drv=mlx5_core
unused= 

[OFED version]
MLNX_OFED_LINUX-5.7-1.0.2.0 (OFED-5.7-1.0.2)

[Reproduce in testpmd]
```
sudo ./dpdk-testpmd  -a 0000:3b:00.0 -a 0000:3b:00.1 -c 0x0f -n 4 --vdev
'net_bonding0,slave=0000:3b:00.0,slave=0000:3b:00.1,mode=4,agg_mode=count' --
-i --port-topology=chained
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 2
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'PA'
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.0 (socket 0)
EAL: Probe PCI driver: mlx5_pci (15b3:101d) device: 0000:3b:00.1 (socket 0)
bond_ethdev_mode_set(1625) - Using mode 4, it is necessary to do TX burst and
RX burst at least every 100ms.
TELEMETRY: No legacy callbacks, legacy socket not created
Interactive-mode selected
testpmd: create a new mbuf pool <mb_pool_0>: n=171456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc
testpmd: create a new mbuf pool <mb_pool_1>: n=171456, size=2176, socket=1
testpmd: preferred mempool ops selected: ring_mp_mc
Configuring Port 0 (socket 0)
Port 0: 04:3F:72:C7:B8:84
Configuring Port 1 (socket 0)
Port 1: 04:3F:72:C7:B8:85
Configuring Port 2 (socket 0)
Device with port_id=0 already stopped

Port 2: link state change event
Device with port_id=1 already stopped
Port 2: 04:3F:72:C7:B8:84
Checking link statuses...
Done

testpmd> flow create 2 group 0 priority 0 ingress pattern eth dst spec
01:00:00:00:00:00 dst mask 01:00:00:00:00:00 / end actions jump group 1 / end
bond_flow_create(104) - Failed to create flow on slave 0
port_flow_complain(): Caught PMD error type 1 (cause unspecified): hardware
refuses to create flow: Invalid argument
```

[Notes]
When using a different action, such as 'rss' or 'queue', the rule is created
without any issues.
When using a different Ethernet mask, such as '0F:00:00:00:00:00', the rule is
also created.
When using a different mode of the bonding PMD, the rule is created.
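
For comparison, the working variants described above would look like this in
testpmd flow syntax (commands reconstructed from the notes, not re-run here):

```bash
# Same multicast match, but a queue action instead of jump -- accepted:
flow create 2 group 0 priority 0 ingress pattern eth dst spec 01:00:00:00:00:00
dst mask 01:00:00:00:00:00 / end actions queue index 0 / end

# Same jump action, but a mask that does not isolate the multicast bit -- accepted:
flow create 2 group 0 priority 0 ingress pattern eth dst spec 01:00:00:00:00:00
dst mask 0F:00:00:00:00:00 / end actions jump group 1 / end
```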

-- 
You are receiving this mail because:
You are the assignee for the bug.



* [DPDK/ethdev Bug 1266] net/mlx5: cannot create rte_flow rule matching ethernet multicast with jump action on bond mode 4
  2023-07-19 13:48 [Bug 1266] net/mlx5: cannot create rte_flow rule matching ethernet multicast with jump action on bond mode 4 bugzilla
@ 2025-10-27 14:07 ` bugzilla
  0 siblings, 0 replies; 2+ messages in thread
From: bugzilla @ 2025-10-27 14:07 UTC (permalink / raw)
  To: dev

http://bugs.dpdk.org/show_bug.cgi?id=1266

mkashani@nvidia.com (mkashani@nvidia.com) changed:

           What    |Removed                     |Added
----------------------------------------------------------------------------
         Resolution|---                         |WONTFIX
                 CC|                            |mkashani@nvidia.com
             Status|UNCONFIRMED                 |RESOLVED

--- Comment #1 from mkashani@nvidia.com (mkashani@nvidia.com) ---
Hi,
Sorry for the late response.

**Root Cause**: mlx5 hardware/firmware steering on the root table (group 0) has
limited support for jump actions combined with partial MAC address masks. The
bonding PMD passes the flow rule directly to the slave ports without adapting it.

## Immediate Workarounds

### Workaround 1: Use Group 1 (Recommended)
Instead of group 0, use group 1:

```bash
flow create 2 group 1 priority 0 ingress pattern eth dst spec 01:00:00:00:00:00
dst mask 01:00:00:00:00:00 / end actions jump group 2 / end
```

First, create a catch-all rule in group 0 to reach group 1:
```bash
flow create 2 group 0 priority 1 ingress pattern eth / end actions jump group 1
/ end
```
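
Putting the two rules together, a complete working session would create the
group 0 catch-all first and then the multicast rule in group 1 (testpmd syntax
as above; the creation order shown makes group 1 reachable before the multicast
rule matches, but the commands have not been re-verified here):

```bash
# Group 0 (root table): catch-all that jumps every packet to group 1
flow create 2 group 0 priority 1 ingress pattern eth / end actions jump group 1 / end

# Group 1 (non-root table): multicast match with jump, avoiding the root-table limitation
flow create 2 group 1 priority 0 ingress pattern eth dst spec 01:00:00:00:00:00
dst mask 01:00:00:00:00:00 / end actions jump group 2 / end
```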


