From: bugzilla@dpdk.org
To: dev@dpdk.org
Subject: [DPDK/other Bug 1846] [mlx5] adding a vlan filter offload causes packets to be steered towards the kernel
Date: Mon, 01 Dec 2025 16:29:21 +0000
Message-ID: <bug-1846-3@http.bugs.dpdk.org/>

http://bugs.dpdk.org/show_bug.cgi?id=1846

            Bug ID: 1846
           Summary: [mlx5] adding a vlan filter offload causes packets to
                    be steered towards the kernel
           Product: DPDK
           Version: 24.11
          Hardware: All
                OS: All
            Status: UNCONFIRMED
          Severity: normal
          Priority: Normal
         Component: other
          Assignee: dev@dpdk.org
          Reporter: robin@jarry.cc
  Target Milestone: ---

When VLAN filter offload is enabled and a VLAN filter is configured on a VF,
unicast packets are no longer received by the DPDK PMD; they go directly to
the kernel netdev instead. Note: when promiscuous mode is enabled, the packets
are received by DPDK again.
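
For reference, the ethdev calls behind testpmd's --enable-hw-vlan-filter and
"rx_vlan add" should be roughly the following (a minimal sketch, not taken
from this report; the helper name, port id, VLAN id and queue counts are
illustrative, and queue setup plus rte_eth_dev_start() are omitted):

#include <rte_ethdev.h>

/* Illustrative sketch, not from the report: enable the VLAN filter offload
 * at configure time, then add one VLAN id to the filter table. */
static int
setup_vlan_filter(uint16_t port_id, uint16_t vlan_id)
{
        struct rte_eth_conf conf = {0};
        int ret;

        conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_FILTER;

        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret < 0)
                return ret;

        /* On the affected mlx5 VF, unicast packets stop reaching the PMD
         * after this call and are delivered to the kernel netdev instead. */
        return rte_eth_dev_vlan_filter(port_id, vlan_id, 1);
}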

Hardware config
===============

SLOT          DRIVER     IFNAME        MAC                LINK/STATE  SPEED   DEVICE
0000:18:00.0  mlx5_core  enp24s0f0np0  b8:3f:d2:fa:53:86  1/up        25Gb/s  MT2894 Family [ConnectX-6 Lx]
0000:18:00.2  mlx5_core  enp24s0f0v0   02:aa:aa:aa:aa:00  1/up        25Gb/s  ConnectX Family mlx5Gen Virtual Function

~# cat /sys/class/net/enp24s0f0np0/device/sriov_numvfs
1

~# devlink dev eswitch show pci/0000:18:00.0
pci/0000:18:00.0: mode legacy inline-mode none encap-mode basic

~# ip link show enp24s0f0np0
6: enp24s0f0np0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether b8:3f:d2:fa:53:86 brd ff:ff:ff:ff:ff:ff
    vf 0     link/ether 02:aa:aa:aa:aa:00 brd ff:ff:ff:ff:ff:ff, spoof checking off, link-state auto, trust on, query_rss off

~# ip link show enp24s0f0v0
12: enp24s0f0v0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000
    link/ether 02:aa:aa:aa:aa:00 brd ff:ff:ff:ff:ff:ff

How to reproduce
================

~# tcpdump -ltpnnei enp24s0f0v0 | sed 's/^/[kernel enp24s0f0v0] /' &

~# dpdk-testpmd -l 0,1 -a 0000:18:00.2 -- --forward-mode=rxonly -i --enable-hw-vlan-filter
EAL: Detected CPU lcores: 40
EAL: Detected NUMA nodes: 1
EAL: Detected shared linkage of DPDK
EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
EAL: Selected IOVA mode 'VA'
EAL: Probe PCI driver: mlx5_pci (15b3:101e) device: 0000:18:00.2 (socket 0)
TELEMETRY: No legacy callbacks, legacy socket not created
Set rxonly packet forwarding mode
Interactive-mode selected
testpmd: Flow tunnel offload support might be limited or unavailable on port 0
testpmd: create a new mbuf pool <mb_pool_0>: n=155456, size=2176, socket=0
testpmd: preferred mempool ops selected: ring_mp_mc

Warning! port-topology=paired and odd forward ports number, the last port will
pair with itself.

Configuring Port 0 (socket 0)
Port 0: 02:AA:AA:AA:AA:00
Checking link statuses...
Done

testpmd> set verbose 1
Change verbose level from 0 to 1

testpmd> set promisc all off

testpmd> start
rxonly packet forwarding - ports=1 - cores=1 - streams=1 - NUMA support
enabled, MP allocation mode: native
Logical Core 18 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00

port 0/queue 0: received 1 packets
  src=2C:4C:15:07:98:70 - dst=01:80:C2:00:00:0E - pool=mb_pool_0 - type=0x88cc
- length=457 - nb_segs=1 - hw ptype: L2_ETHER  - sw ptype: L2_ETHER  -
l2_len=14 - Receive queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_UNKNOWN RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN
RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
  rxonly packet forwarding packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=1
  port 0: RX queue number: 1 Tx queue number: 1
    Rx offloads=0x200 Tx offloads=0x10000
    RX queue: 0
      RX desc=256 - RX free threshold=64
      RX threshold registers: pthresh=0 hthresh=0  wthresh=0
      RX Offloads=0x200
    TX queue: 0
      TX desc=256 - TX free threshold=0
      TX threshold registers: pthresh=0 hthresh=0  wthresh=0
      TX offloads=0x10000 - TX RS bit threshold=0

<send 3 unicast packets towards 02:AA:AA:AA:AA:00>

port 0/queue 0: received 1 packets
  src=B4:45:06:FD:4A:66 - dst=02:AA:AA:AA:AA:00 - pool=mb_pool_0 - type=0x0800
- length=77 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw
ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive
queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD
RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
port 0/queue 0: received 1 packets
  src=B4:45:06:FD:4A:66 - dst=02:AA:AA:AA:AA:00 - pool=mb_pool_0 - type=0x0800
- length=77 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw
ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive
queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD
RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN
port 0/queue 0: received 1 packets
  src=B4:45:06:FD:4A:66 - dst=02:AA:AA:AA:AA:00 - pool=mb_pool_0 - type=0x0800
- length=77 - nb_segs=1 - hw ptype: L2_ETHER L3_IPV4_EXT_UNKNOWN L4_UDP  - sw
ptype: L2_ETHER L3_IPV4 L4_UDP  - l2_len=14 - l3_len=20 - l4_len=8 - Receive
queue=0x0
  ol_flags: RTE_MBUF_F_RX_L4_CKSUM_GOOD RTE_MBUF_F_RX_IP_CKSUM_GOOD
RTE_MBUF_F_RX_OUTER_L4_CKSUM_UNKNOWN

<packets are received by testpmd>

testpmd> rx_vlan add 42 0

<send the same 3 unicast packets towards 02:aa:aa:aa:aa:00>

[kernel enp24s0f0v0] b4:45:06:fd:4a:66 > 02:aa:aa:aa:aa:00, ethertype IPv4
(0x0800), length 77: 172.16.0.2.53 > 172.16.0.1.53: 27755 updateM [b2&3=0x6664]
[29547a] [27244q] [26212n] [27238au] [|domain]
[kernel enp24s0f0v0] b4:45:06:fd:4a:66 > 02:aa:aa:aa:aa:00, ethertype IPv4
(0x0800), length 77: 172.16.0.2.53 > 172.16.0.1.53: 27755 updateM [b2&3=0x6664]
[29547a] [27244q] [26212n] [27238au] [|domain]
[kernel enp24s0f0v0] b4:45:06:fd:4a:66 > 02:aa:aa:aa:aa:00, ethertype IPv4
(0x0800), length 77: 172.16.0.2.53 > 172.16.0.1.53: 27755 updateM [b2&3=0x6664]
[29547a] [27244q] [26212n] [27238au] [|domain]

<packets are now received by the kernel>
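
As noted in the description, enabling promiscuous mode brings the packets back
to the DPDK port; in testpmd that is "set promisc all on", which presumably
maps to something like the following (again only an illustrative sketch, the
helper name is made up):

#include <rte_ethdev.h>

/* Workaround sketch (assumption based on the note in the description):
 * with promiscuous mode on, the VF receives the unicast packets again even
 * while the VLAN filter is installed. */
static int
apply_promisc_workaround(uint16_t port_id)
{
        return rte_eth_promiscuous_enable(port_id);
}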
