From: Maayan Kashani <mkashani@nvidia.com>
To: <dev@dpdk.org>
Cc: <mkashani@nvidia.com>, <rasland@nvidia.com>, <stable@dpdk.org>,
"Dariusz Sosnowski" <dsosnowski@nvidia.com>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Bing Zhao <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>,
Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>
Subject: [PATCH 4/4] net/mlx5: fix redundant control rules in promiscuous mode
Date: Mon, 12 Jan 2026 11:24:38 +0200
Message-ID: <20260112092439.14843-5-mkashani@nvidia.com>
In-Reply-To: <20260112092439.14843-1-mkashani@nvidia.com>
When promiscuous mode is enabled, the device receives all traffic
regardless of destination MAC address. Previously, the code set both
the promiscuous flag and the DMAC/multicast control flow rules, even
though the promiscuous rule already matches all traffic, making the
other rules redundant.
Make the DMAC and multicast/broadcast control flow rules conditional
on promiscuous mode being disabled. When promiscuous mode is enabled,
only the MLX5_CTRL_PROMISCUOUS flag is set.
Fixes: 9fa7c1cddb85 ("net/mlx5: create control flow rules with HWS")
Cc: stable@dpdk.org
Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
---
drivers/net/mlx5/mlx5_trigger.c | 16 ++++++++++------
1 file changed, 10 insertions(+), 6 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_trigger.c b/drivers/net/mlx5/mlx5_trigger.c
index 028844e45d6..b38ba9022ea 100644
--- a/drivers/net/mlx5/mlx5_trigger.c
+++ b/drivers/net/mlx5/mlx5_trigger.c
@@ -1682,13 +1682,17 @@ mlx5_traffic_enable_hws(struct rte_eth_dev *dev)
dev->data->port_id, -ret);
goto error;
}
- if (dev->data->promiscuous)
+ if (dev->data->promiscuous) {
flags |= MLX5_CTRL_PROMISCUOUS;
- if (dev->data->all_multicast)
- flags |= MLX5_CTRL_ALL_MULTICAST;
- else
- flags |= MLX5_CTRL_BROADCAST | MLX5_CTRL_IPV4_MULTICAST | MLX5_CTRL_IPV6_MULTICAST;
- flags |= MLX5_CTRL_DMAC;
+ } else {
+ if (dev->data->all_multicast)
+ flags |= MLX5_CTRL_ALL_MULTICAST;
+ else
+ flags |= (MLX5_CTRL_BROADCAST |
+ MLX5_CTRL_IPV4_MULTICAST |
+ MLX5_CTRL_IPV6_MULTICAST);
+ flags |= MLX5_CTRL_DMAC;
+ }
if (priv->vlan_filter_n)
flags |= MLX5_CTRL_VLAN_FILTER;
return mlx5_flow_hw_ctrl_flows(dev, flags);
--
2.21.0
Thread overview: 8+ messages
2026-01-12 9:24 [PATCH 0/4] net/mlx5: future HW devargs defaults and fixes Maayan Kashani
2026-01-12 9:24 ` [PATCH 1/4] drivers: fix flow devarg handling for future HW Maayan Kashani
2026-01-12 18:17 ` Dariusz Sosnowski
2026-01-12 9:24 ` [PATCH 2/4] net/mlx5: fix default memzone requirements in HWS Maayan Kashani
2026-01-12 9:24 ` [PATCH 3/4] net/mlx5: fix internal HWS pattern template creation Maayan Kashani
2026-01-12 18:18 ` Dariusz Sosnowski
2026-01-12 9:24 ` Maayan Kashani [this message]
2026-01-12 18:19 ` [PATCH 4/4] net/mlx5: fix redundant control rules in promiscuous mode Dariusz Sosnowski