DPDK patches and discussions
From: Gavin Li <gavinl@nvidia.com>
To: <matan@nvidia.com>, <viacheslavo@nvidia.com>, <orika@nvidia.com>,
	<thomas@monjalon.net>, Dariusz Sosnowski <dsosnowski@nvidia.com>,
	Bing Zhao <bingz@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
	"Minggang Li (Gavin)" <gavinl@nvidia.com>
Cc: <dev@dpdk.org>, <rasland@nvidia.com>, <stable@dpdk.org>
Subject: [PATCH V3 2/2] net/mlx5: add support for flows targeting multicast MAC addresses
Date: Mon, 25 Aug 2025 17:13:22 +0300
Message-ID: <20250825141322.974335-3-gavinl@nvidia.com>
In-Reply-To: <20250825141322.974335-1-gavinl@nvidia.com>

Rules for multicast MAC addresses are intended to filter multicast traffic
and are managed through the multicast MAC add/remove APIs. In the
mlx5_dev_spawn() function, devices (PF, VFs, and SFs) retrieve the
netdev-configured MAC addresses, including multicast ones, via netlink and
store them in the PMD device data.
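
A minimal usage sketch (not part of the patch), assuming the multicast MAC
add/remove APIs mentioned above include the generic ethdev multicast filter
API; the port id and the address below are placeholders:

#include <rte_ethdev.h>
#include <rte_ether.h>

/* Install a single multicast MAC filter on a port (illustrative only). */
static int
add_mcast_filter(uint16_t port_id)
{
	/* 01:00:5e:00:00:01 is the all-hosts IPv4 multicast group MAC. */
	struct rte_ether_addr mc_addr = {
		.addr_bytes = { 0x01, 0x00, 0x5e, 0x00, 0x00, 0x01 },
	};

	/*
	 * Replace the port's multicast filter list with this single entry;
	 * the PMD is expected to create the matching RX flow for it.
	 */
	return rte_eth_dev_set_mc_addr_list(port_id, &mc_addr, 1);
}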

Previously, flows for multicast MAC addresses were incorrectly disabled,
causing the multicast MAC add API to stop working. As a result, multicast
traffic directed to those multicast MAC addresses was not received.

To resolve this and keep the multicast MAC address rules up to date, create
them within mlx5_traffic_enable().

Fixes: 2d0665a7f771 ("net/mlx5: align PF and VF/SF MAC address handling")
Cc: stable@dpdk.org

Signed-off-by: Gavin Li <gavinl@nvidia.com>
---
 drivers/net/mlx5/mlx5_trigger.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 6c6f228afd..46479fbf09 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1813,7 +1813,10 @@ mlx5_traffic_enable(struct rte_eth_dev *dev)
 	for (i = 0; i != MLX5_MAX_MAC_ADDRESSES; ++i) {
 		struct rte_ether_addr *mac = &dev->data->mac_addrs[i];
 
-		if (!memcmp(mac, &cmp, sizeof(*mac)) || rte_is_multicast_ether_addr(mac))
+		/* Add flows for unicast and multicast mac addresses added by API. */
+		if (!memcmp(mac, &cmp, sizeof(*mac)) ||
+		    !BITFIELD_ISSET(priv->mac_own, i) ||
+		    (dev->data->all_multicast && rte_is_multicast_ether_addr(mac)))
 			continue;
 		memcpy(&unicast.hdr.dst_addr.addr_bytes,
 		       mac->addr_bytes,
-- 
2.34.1
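
For readability, here is a minimal sketch, not part of the patch, that
restates the updated skip condition as a standalone predicate. The "owned"
flag is a hypothetical stand-in for BITFIELD_ISSET(priv->mac_own, i), i.e.
whether the entry was added through the MAC add APIs; only rte_ether.h
helpers are used.

#include <stdbool.h>
#include <string.h>
#include <rte_ether.h>

/* Return true when a MAC table entry should get a dedicated RX flow. */
static bool
mac_entry_needs_flow(const struct rte_ether_addr *mac, bool owned,
		     bool all_multicast)
{
	static const struct rte_ether_addr zero = { .addr_bytes = { 0 } };

	if (memcmp(mac, &zero, sizeof(*mac)) == 0)
		return false;	/* empty slot */
	if (!owned)
		return false;	/* not added through the MAC add APIs */
	if (all_multicast && rte_is_multicast_ether_addr(mac))
		return false;	/* already covered by all-multicast mode */
	return true;		/* create a dedicated flow for this address */
}

Compared to the previous code, the multicast check is now conditional on
all_multicast, so multicast addresses added via the API get their own flows
unless all-multicast mode already covers them.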


Thread overview: 5+ messages
2025-08-25 14:13 [PATCH V3 0/2] resolve flow creation issue for " Gavin Li
2025-08-25 14:13 ` [PATCH V3 1/2] net/mlx5: update how MAC address bit-fields are used Gavin Li
2025-08-25 14:54   ` Thomas Monjalon
2025-08-25 14:13 ` Gavin Li [this message]
2025-08-25 14:52   ` [PATCH V3 2/2] net/mlx5: add support for flows targeting multicast MAC addresses Thomas Monjalon
