From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Bing Zhao <bingz@nvidia.com>, Ori Kam <orika@nvidia.com>,
Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>
Cc: <dev@dpdk.org>
Subject: [PATCH v2 04/10] net/mlx5: support destroying unicast flow rules
Date: Tue, 22 Oct 2024 14:06:12 +0200 [thread overview]
Message-ID: <20241022120618.512091-5-dsosnowski@nvidia.com> (raw)
In-Reply-To: <20241022120618.512091-1-dsosnowski@nvidia.com>
This patch adds support for destroying:
- unicast DMAC control flow rules and
- unicast DMAC with VLAN control flow rules,
without affecting any other control flow rules,
when the HWS flow engine is used.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
drivers/net/mlx5/mlx5_flow.h | 8 +++
drivers/net/mlx5/mlx5_flow_hw.c | 72 +++++++++++++++++++++++++++
drivers/net/mlx5/mlx5_flow_hw_stubs.c | 27 ++++++++++
3 files changed, 107 insertions(+)
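Both destroy functions below walk `priv->hw_ctrl_flows` and remove only the entries matching the requested DMAC (and VLAN), saving the next pointer before `LIST_REMOVE()` so the traversal survives the removal. As a self-contained illustration of that pattern (using `sys/queue.h` directly; `struct ctrl_flow`, its fields, and `ctrl_flow_destroy_matching()` are simplified stand-ins, not mlx5 code):

```c
#include <stdlib.h>
#include <sys/queue.h>

/* Simplified stand-in for struct mlx5_hw_ctrl_flow: each entry carries a
 * type tag and a match key; the real entries also hold the flow handle. */
struct ctrl_flow {
	LIST_ENTRY(ctrl_flow) next;
	int type;
	int key;
};

LIST_HEAD(ctrl_flow_list, ctrl_flow);

/* Destroy every entry matching (type, key), leaving the rest intact.
 * The next pointer is saved before removal, exactly as in the patch,
 * because LIST_REMOVE() invalidates the removed entry's linkage. */
static int
ctrl_flow_destroy_matching(struct ctrl_flow_list *head, int type, int key)
{
	struct ctrl_flow *entry = LIST_FIRST(head);
	struct ctrl_flow *tmp;
	int removed = 0;

	while (entry != NULL) {
		tmp = LIST_NEXT(entry, next);
		if (entry->type == type && entry->key == key) {
			LIST_REMOVE(entry, next);
			free(entry);
			removed++;
		}
		entry = tmp;
	}
	return removed;
}
```

The same structure appears twice in the patch (once per rule type); only the type tag and the match condition differ.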
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 2ff0b25d4d..165d17e40a 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -2994,11 +2994,19 @@ int mlx5_flow_hw_ctrl_flows(struct rte_eth_dev *dev, uint32_t flags);
/** Create a control flow rule for matching unicast DMAC (HWS). */
int mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev, const struct rte_ether_addr *addr);
+/** Destroy a control flow rule for matching unicast DMAC (HWS). */
+int mlx5_flow_hw_ctrl_flow_dmac_destroy(struct rte_eth_dev *dev, const struct rte_ether_addr *addr);
+
/** Create a control flow rule for matching unicast DMAC with VLAN (HWS). */
int mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev,
const struct rte_ether_addr *addr,
const uint16_t vlan);
+/** Destroy a control flow rule for matching unicast DMAC with VLAN (HWS). */
+int mlx5_flow_hw_ctrl_flow_dmac_vlan_destroy(struct rte_eth_dev *dev,
+ const struct rte_ether_addr *addr,
+ const uint16_t vlan);
+
void mlx5_flow_hw_cleanup_ctrl_rx_templates(struct rte_eth_dev *dev);
int mlx5_flow_group_to_table(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index d573cb5640..c017b64624 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -16209,6 +16209,41 @@ mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev,
addr, 0);
}
+int
+mlx5_flow_hw_ctrl_flow_dmac_destroy(struct rte_eth_dev *dev,
+ const struct rte_ether_addr *addr)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hw_ctrl_flow *entry;
+ struct mlx5_hw_ctrl_flow *tmp;
+ int ret;
+
+ /*
+ * HWS does not have automatic RSS flow expansion,
+ * so each variant of the control flow rule is a separate entry in the list.
+ * In that case, the whole list must be traversed.
+ */
+ entry = LIST_FIRST(&priv->hw_ctrl_flows);
+ while (entry != NULL) {
+ tmp = LIST_NEXT(entry, next);
+
+ if (entry->info.type != MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS_UNICAST_DMAC ||
+ !rte_is_same_ether_addr(addr, &entry->info.uc.dmac)) {
+ entry = tmp;
+ continue;
+ }
+
+ ret = flow_hw_destroy_ctrl_flow(dev, entry->flow);
+ LIST_REMOVE(entry, next);
+ mlx5_free(entry);
+ if (ret)
+ return ret;
+
+ entry = tmp;
+ }
+ return 0;
+}
+
int
mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev,
const struct rte_ether_addr *addr,
@@ -16218,6 +16253,43 @@ mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev,
addr, vlan);
}
+int
+mlx5_flow_hw_ctrl_flow_dmac_vlan_destroy(struct rte_eth_dev *dev,
+ const struct rte_ether_addr *addr,
+ const uint16_t vlan)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hw_ctrl_flow *entry;
+ struct mlx5_hw_ctrl_flow *tmp;
+ int ret;
+
+ /*
+ * HWS does not have automatic RSS flow expansion,
+ * so each variant of the control flow rule is a separate entry in the list.
+ * In that case, the whole list must be traversed.
+ */
+ entry = LIST_FIRST(&priv->hw_ctrl_flows);
+ while (entry != NULL) {
+ tmp = LIST_NEXT(entry, next);
+
+ if (entry->info.type != MLX5_HW_CTRL_FLOW_TYPE_DEFAULT_RX_RSS_UNICAST_DMAC_VLAN ||
+ !rte_is_same_ether_addr(addr, &entry->info.uc.dmac) ||
+ vlan != entry->info.uc.vlan) {
+ entry = tmp;
+ continue;
+ }
+
+ ret = flow_hw_destroy_ctrl_flow(dev, entry->flow);
+ LIST_REMOVE(entry, next);
+ mlx5_free(entry);
+ if (ret)
+ return ret;
+
+ entry = tmp;
+ }
+ return 0;
+}
+
static __rte_always_inline uint32_t
mlx5_reformat_domain_to_tbl_type(const struct rte_flow_indir_action_conf *domain)
{
diff --git a/drivers/net/mlx5/mlx5_flow_hw_stubs.c b/drivers/net/mlx5/mlx5_flow_hw_stubs.c
index 985c046056..0e79e6c1f2 100644
--- a/drivers/net/mlx5/mlx5_flow_hw_stubs.c
+++ b/drivers/net/mlx5/mlx5_flow_hw_stubs.c
@@ -26,6 +26,19 @@ mlx5_flow_hw_ctrl_flow_dmac(struct rte_eth_dev *dev __rte_unused,
return -rte_errno;
}
+/*
+ * This is a stub for the real implementation of this function in mlx5_flow_hw.c in case:
+ * - PMD is compiled on Windows or
+ * - available rdma-core does not support HWS.
+ */
+__rte_weak int
+mlx5_flow_hw_ctrl_flow_dmac_destroy(struct rte_eth_dev *dev __rte_unused,
+ const struct rte_ether_addr *addr __rte_unused)
+{
+ rte_errno = ENOTSUP;
+ return -rte_errno;
+}
+
/*
* This is a stub for the real implementation of this function in mlx5_flow_hw.c in case:
* - PMD is compiled on Windows or
@@ -39,3 +52,17 @@ mlx5_flow_hw_ctrl_flow_dmac_vlan(struct rte_eth_dev *dev __rte_unused,
rte_errno = ENOTSUP;
return -rte_errno;
}
+
+/*
+ * This is a stub for the real implementation of this function in mlx5_flow_hw.c in case:
+ * - PMD is compiled on Windows or
+ * - available rdma-core does not support HWS.
+ */
+__rte_weak int
+mlx5_flow_hw_ctrl_flow_dmac_vlan_destroy(struct rte_eth_dev *dev __rte_unused,
+ const struct rte_ether_addr *addr __rte_unused,
+ const uint16_t vlan __rte_unused)
+{
+ rte_errno = ENOTSUP;
+ return -rte_errno;
+}
--
2.39.5
Thread overview: 24+ messages
2024-10-17 7:57 [PATCH 00/10] net/mlx5: improve MAC address and VLAN add latency Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 01/10] net/mlx5: track unicast DMAC control flow rules Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 02/10] net/mlx5: add checking if unicast flow rule exists Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 03/10] net/mlx5: rework creation of unicast flow rules Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 04/10] net/mlx5: support destroying " Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 05/10] net/mlx5: rename control flow rules types Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 06/10] net/mlx5: shared init of control flow rules Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 07/10] net/mlx5: add legacy unicast flow rules management Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 08/10] net/mlx5: add legacy unicast flow rule registration Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 09/10] net/mlx5: add dynamic unicast flow rule management Dariusz Sosnowski
2024-10-17 7:57 ` [PATCH 10/10] net/mlx5: optimize MAC address and VLAN filter handling Dariusz Sosnowski
2024-10-17 8:01 ` [PATCH 00/10] net/mlx5: improve MAC address and VLAN add latency Slava Ovsiienko
2024-10-22 12:06 ` [PATCH v2 " Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 01/10] net/mlx5: track unicast DMAC control flow rules Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 02/10] net/mlx5: add checking if unicast flow rule exists Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 03/10] net/mlx5: rework creation of unicast flow rules Dariusz Sosnowski
2024-10-22 12:06 ` Dariusz Sosnowski [this message]
2024-10-22 12:06 ` [PATCH v2 05/10] net/mlx5: rename control flow rules types Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 06/10] net/mlx5: shared init of control flow rules Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 07/10] net/mlx5: add legacy unicast flow rules management Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 08/10] net/mlx5: add legacy unicast flow rule registration Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 09/10] net/mlx5: add dynamic unicast flow rule management Dariusz Sosnowski
2024-10-22 12:06 ` [PATCH v2 10/10] net/mlx5: optimize MAC address and VLAN filter handling Dariusz Sosnowski
2024-10-22 15:41 ` [PATCH v2 00/10] net/mlx5: improve MAC address and VLAN add latency Stephen Hemminger