From: Maayan Kashani <mkashani@nvidia.com>
To: <dev@dpdk.org>
Cc: <mkashani@nvidia.com>, <dsosnowski@nvidia.com>,
<rasland@nvidia.com>, <stable@dpdk.org>,
Bing Zhao <bingz@nvidia.com>,
Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>
Subject: [PATCH 2/3] net/mlx5: fix crash in non template metadata split
Date: Tue, 28 Jan 2025 09:54:04 +0200 [thread overview]
Message-ID: <20250128075406.175330-3-mkashani@nvidia.com> (raw)
In-Reply-To: <20250128075406.175330-1-mkashani@nvidia.com>

In switchdev mode, a rule that uses the mark action is split into two
rules: the first rule carries the mark action, tags the packet with a
flow ID, and jumps to the second rule, which matches on that tag and
performs the remaining actions (such as RSS or queue).

First, fix the crash caused by accessing RSS queue[0] instead of the
queue index, the same as in the hairpin Rx queue check done for the
queue action.

Second, the SET_TAG action is not supported in HWS, so replace it with
a MODIFY_FIELD action, using the same tag index for both the action
and the matching.

Fixes: 821a6a5cc495 ("net/mlx5: add metadata split for compatibility")
Cc: stable@dpdk.org
Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
Acked-by: Bing Zhao <bingz@nvidia.com>
---
drivers/net/mlx5/mlx5_nta_split.c | 28 +++++++++++++++-------------
1 file changed, 15 insertions(+), 13 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_nta_split.c b/drivers/net/mlx5/mlx5_nta_split.c
index b26f305bcab..6a85ab7fd12 100644
--- a/drivers/net/mlx5/mlx5_nta_split.c
+++ b/drivers/net/mlx5/mlx5_nta_split.c
@@ -13,6 +13,8 @@
#ifdef HAVE_MLX5_HWS_SUPPORT
+#define BITS_PER_BYTE 8
+
/*
* Generate new actions lists for prefix and suffix flows.
*
@@ -44,11 +46,10 @@ mlx5_flow_nta_split_qrss_actions_prep(struct rte_eth_dev *dev,
struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_rte_flow_action_set_tag *set_tag;
+ struct rte_flow_action_modify_field *set_tag;
struct rte_flow_action_jump *jump;
const int qrss_idx = qrss - actions;
uint32_t flow_id = 0;
- int ret = 0;
/* Allocate the new subflow ID and used to be matched later. */
mlx5_ipool_malloc(priv->sh->ipool[MLX5_IPOOL_RSS_EXPANTION_FLOW_ID], &flow_id);
@@ -67,16 +68,16 @@ mlx5_flow_nta_split_qrss_actions_prep(struct rte_eth_dev *dev,
/* Count MLX5_RTE_FLOW_ACTION_TYPE_TAG. */
actions_n++;
set_tag = (void *)(prefix_act + actions_n);
- /* Reuse ASO reg, should always succeed. Consider to use REG_C_6. */
- ret = flow_hw_get_reg_id_by_domain(dev, RTE_FLOW_ITEM_TYPE_METER_COLOR,
- MLX5DR_TABLE_TYPE_NIC_RX, 0);
- MLX5_ASSERT(ret != (int)REG_NON);
- set_tag->id = (enum modify_reg)ret;
/* Internal SET_TAG action to set flow ID. */
- set_tag->data = flow_id;
+ set_tag->operation = RTE_FLOW_MODIFY_SET;
+ set_tag->width = sizeof(flow_id) * BITS_PER_BYTE;
+ set_tag->src.field = RTE_FLOW_FIELD_VALUE;
+ memcpy(&set_tag->src.value, &flow_id, sizeof(flow_id));
+ set_tag->dst.field = RTE_FLOW_FIELD_TAG;
+ set_tag->dst.tag_index = RTE_PMD_MLX5_LINEAR_HASH_TAG_INDEX;
/* Construct new actions array and replace QUEUE/RSS action. */
prefix_act[qrss_idx] = (struct rte_flow_action) {
- .type = (enum rte_flow_action_type)MLX5_RTE_FLOW_ACTION_TYPE_TAG,
+ .type = RTE_FLOW_ACTION_TYPE_MODIFY_FIELD,
.conf = set_tag,
};
/* JUMP action to jump to mreg copy table (CP_TBL). */
@@ -132,8 +133,9 @@ mlx5_flow_nta_split_qrss_items_prep(struct rte_eth_dev *dev,
split_items[1].type = RTE_FLOW_ITEM_TYPE_END;
q_tag_spec->data = qrss_id;
q_tag_spec->id = (enum modify_reg)
- flow_hw_get_reg_id_by_domain(dev, RTE_FLOW_ITEM_TYPE_METER_COLOR,
- MLX5DR_TABLE_TYPE_NIC_RX, 0);
+ flow_hw_get_reg_id_by_domain(dev, RTE_FLOW_ITEM_TYPE_TAG,
+ MLX5DR_TABLE_TYPE_NIC_RX,
+ RTE_PMD_MLX5_LINEAR_HASH_TAG_INDEX);
MLX5_ASSERT(q_tag_spec->id != REG_NON);
}
@@ -211,12 +213,12 @@ mlx5_flow_nta_split_metadata(struct rte_eth_dev *dev,
return 0;
} else if (action_flags & MLX5_FLOW_ACTION_RSS) {
rss = (const struct rte_flow_action_rss *)actions->conf;
- if (mlx5_rxq_is_hairpin(dev, rss->queue[0]))
+ if (mlx5_rxq_is_hairpin(dev, rss->queue_num))
return 0;
}
/* The prefix and suffix flows' actions. */
pefx_act_size = sizeof(struct rte_flow_action) * (actions_n + 1) +
- sizeof(struct rte_flow_action_set_tag) +
+ sizeof(struct rte_flow_action_modify_field) +
sizeof(struct rte_flow_action_jump);
sfx_act_size = sizeof(struct rte_flow_action) * 2;
/* The suffix attribute. */
--
2.21.0
Thread overview: 4+ messages
2025-01-28 7:54 [PATCH 0/3] Non template to HWS fixes Maayan Kashani
2025-01-28 7:54 ` [PATCH 1/3] net/mlx5: fix limitation of actions per rule Maayan Kashani
2025-01-28 7:54 ` Maayan Kashani [this message]
2025-01-28 7:54 ` [PATCH 3/3] net/mlx5: fix flow flush for non-template flows Maayan Kashani