From: Dariusz Sosnowski <dsosnowski@nvidia.com>
To: Viacheslav Ovsiienko <viacheslavo@nvidia.com>,
Ori Kam <orika@nvidia.com>, Suanming Mou <suanmingm@nvidia.com>,
Matan Azrad <matan@nvidia.com>
Cc: <dev@dpdk.org>, Raslan Darawsheh <rasland@nvidia.com>,
Bing Zhao <bingz@nvidia.com>
Subject: [PATCH v2 04/11] net/mlx5: skip the unneeded resource index allocation
Date: Thu, 29 Feb 2024 12:51:49 +0100
Message-ID: <20240229115157.201671-5-dsosnowski@nvidia.com>
In-Reply-To: <20240229115157.201671-1-dsosnowski@nvidia.com>
From: Bing Zhao <bingz@nvidia.com>
The resource index was introduced to decouple a flow rule from the
resources it uses in hardware steering. It is needed only when rule
update is supported.
In some cases, update is not supported on a table (matcher), e.g.:
* Table is resizable
* FW gets involved
* Root table
* Not index based or optimized (not applicable)
In other cases, only one STE entry is required per rule; an update is
then atomic by nature, so the extra resource index is not needed either.
If the matcher does not support rule update, or at most one STE entry
is used per rule, there is no need to manage resource index allocation
and freeing from a separate pool, as sketched below.
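For illustration only (this sketch is not part of the patch), the
resulting policy reduces to the following, reusing identifiers from the
diff below; error handling and surrounding driver context are elided:

    /* At table creation: create the resource pool only if rules can be
     * updated in place AND a rule may need more than one STE/WQE. */
    rpool_needed = mlx5dr_matcher_is_updatable(tbl->matcher_info[0].matcher) &&
                   mlx5dr_matcher_is_dependent(tbl->matcher_info[0].matcher);
    if (rpool_needed)
            tbl->resource = mlx5_ipool_create(&cfg);

    /* At rule insertion: allocate a dedicated resource index only when
     * the pool exists; otherwise the 1-based flow index doubles as the
     * resource index. */
    if (table->resource) {
            mlx5_ipool_malloc(table->resource, &res_idx);
            if (!res_idx)
                    goto error;
            flow->res_idx = res_idx;
    } else {
            flow->res_idx = flow_idx;
    }

Skipping the pool in the common single-STE case saves one ipool
allocation and one free per rule on the insertion hot path.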
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_hw.c | 129 +++++++++++++++++++-------------
1 file changed, 76 insertions(+), 53 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_hw.c b/drivers/net/mlx5/mlx5_flow_hw.c
index ef91a23a9b..1fe8f42618 100644
--- a/drivers/net/mlx5/mlx5_flow_hw.c
+++ b/drivers/net/mlx5/mlx5_flow_hw.c
@@ -3383,9 +3383,6 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
if (!flow)
goto error;
- mlx5_ipool_malloc(table->resource, &res_idx);
- if (!res_idx)
- goto error;
rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
/*
* Set the table here in order to know the destination table
@@ -3394,7 +3391,14 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
flow->table = table;
flow->mt_idx = pattern_template_index;
flow->idx = flow_idx;
- flow->res_idx = res_idx;
+ if (table->resource) {
+ mlx5_ipool_malloc(table->resource, &res_idx);
+ if (!res_idx)
+ goto error;
+ flow->res_idx = res_idx;
+ } else {
+ flow->res_idx = flow_idx;
+ }
/*
* Set the job type here in order to know if the flow memory
* should be freed or not when get the result from dequeue.
@@ -3404,11 +3408,10 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
job->user_data = user_data;
rule_attr.user_data = job;
/*
- * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
- * insertion hints.
+ * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
+ * for rule insertion hints.
*/
- MLX5_ASSERT(res_idx > 0);
- flow->rule_idx = res_idx - 1;
+ flow->rule_idx = flow->res_idx - 1;
rule_attr.rule_idx = flow->rule_idx;
/*
* Construct the flow actions based on the input actions.
@@ -3451,12 +3454,12 @@ flow_hw_async_flow_create(struct rte_eth_dev *dev,
if (likely(!ret))
return (struct rte_flow *)flow;
error:
- if (job)
- flow_hw_job_put(priv, job, queue);
+ if (table->resource && res_idx)
+ mlx5_ipool_free(table->resource, res_idx);
if (flow_idx)
mlx5_ipool_free(table->flow, flow_idx);
- if (res_idx)
- mlx5_ipool_free(table->resource, res_idx);
+ if (job)
+ flow_hw_job_put(priv, job, queue);
rte_flow_error_set(error, rte_errno,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"fail to create rte flow");
@@ -3527,9 +3530,6 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
flow = mlx5_ipool_zmalloc(table->flow, &flow_idx);
if (!flow)
goto error;
- mlx5_ipool_malloc(table->resource, &res_idx);
- if (!res_idx)
- goto error;
rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
/*
* Set the table here in order to know the destination table
@@ -3538,7 +3538,14 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
flow->table = table;
flow->mt_idx = 0;
flow->idx = flow_idx;
- flow->res_idx = res_idx;
+ if (table->resource) {
+ mlx5_ipool_malloc(table->resource, &res_idx);
+ if (!res_idx)
+ goto error;
+ flow->res_idx = res_idx;
+ } else {
+ flow->res_idx = flow_idx;
+ }
/*
* Set the job type here in order to know if the flow memory
* should be freed or not when get the result from dequeue.
@@ -3547,9 +3554,7 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
job->flow = flow;
job->user_data = user_data;
rule_attr.user_data = job;
- /*
- * Set the rule index.
- */
+ /* Set the rule index. */
flow->rule_idx = rule_index;
rule_attr.rule_idx = flow->rule_idx;
/*
@@ -3585,12 +3590,12 @@ flow_hw_async_flow_create_by_index(struct rte_eth_dev *dev,
if (likely(!ret))
return (struct rte_flow *)flow;
error:
- if (job)
- flow_hw_job_put(priv, job, queue);
- if (res_idx)
+ if (table->resource && res_idx)
mlx5_ipool_free(table->resource, res_idx);
if (flow_idx)
mlx5_ipool_free(table->flow, flow_idx);
+ if (job)
+ flow_hw_job_put(priv, job, queue);
rte_flow_error_set(error, rte_errno,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
"fail to create rte flow");
@@ -3653,9 +3658,6 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
rte_errno = ENOMEM;
goto error;
}
- mlx5_ipool_malloc(table->resource, &res_idx);
- if (!res_idx)
- goto error;
nf = job->upd_flow;
memset(nf, 0, sizeof(struct rte_flow_hw));
rule_acts = flow_hw_get_dr_action_buffer(priv, table, action_template_index, queue);
@@ -3666,7 +3668,14 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
nf->table = table;
nf->mt_idx = of->mt_idx;
nf->idx = of->idx;
- nf->res_idx = res_idx;
+ if (table->resource) {
+ mlx5_ipool_malloc(table->resource, &res_idx);
+ if (!res_idx)
+ goto error;
+ nf->res_idx = res_idx;
+ } else {
+ nf->res_idx = of->res_idx;
+ }
/*
* Set the job type here in order to know if the flow memory
* should be freed or not when get the result from dequeue.
@@ -3676,11 +3685,11 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
job->user_data = user_data;
rule_attr.user_data = job;
/*
- * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices for rule
- * insertion hints.
+ * Indexed pool returns 1-based indices, but mlx5dr expects 0-based indices
+ * for rule insertion hints.
+ * If there is only one STE, the update will be atomic by nature.
*/
- MLX5_ASSERT(res_idx > 0);
- nf->rule_idx = res_idx - 1;
+ nf->rule_idx = nf->res_idx - 1;
rule_attr.rule_idx = nf->rule_idx;
/*
* Construct the flow actions based on the input actions.
@@ -3706,14 +3715,14 @@ flow_hw_async_flow_update(struct rte_eth_dev *dev,
if (likely(!ret))
return 0;
error:
+ if (table->resource && res_idx)
+ mlx5_ipool_free(table->resource, res_idx);
/* Flow created fail, return the descriptor and flow memory. */
if (job)
flow_hw_job_put(priv, job, queue);
- if (res_idx)
- mlx5_ipool_free(table->resource, res_idx);
return rte_flow_error_set(error, rte_errno,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
- "fail to update rte flow");
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "fail to update rte flow");
}
/**
@@ -3968,13 +3977,15 @@ hw_cmpl_flow_update_or_destroy(struct rte_eth_dev *dev,
}
if (job->type != MLX5_HW_Q_JOB_TYPE_UPDATE) {
if (table) {
- mlx5_ipool_free(table->resource, res_idx);
+ if (table->resource)
+ mlx5_ipool_free(table->resource, res_idx);
mlx5_ipool_free(table->flow, flow->idx);
}
} else {
rte_memcpy(flow, job->upd_flow,
offsetof(struct rte_flow_hw, rule));
- mlx5_ipool_free(table->resource, res_idx);
+ if (table->resource)
+ mlx5_ipool_free(table->resource, res_idx);
}
}
@@ -4474,6 +4485,7 @@ flow_hw_table_create(struct rte_eth_dev *dev,
uint32_t i = 0, max_tpl = MLX5_HW_TBL_MAX_ITEM_TEMPLATE;
uint32_t nb_flows = rte_align32pow2(attr->nb_flows);
bool port_started = !!dev->data->dev_started;
+ bool rpool_needed;
size_t tbl_mem_size;
int err;
@@ -4511,13 +4523,6 @@ flow_hw_table_create(struct rte_eth_dev *dev,
tbl->flow = mlx5_ipool_create(&cfg);
if (!tbl->flow)
goto error;
- /* Allocate rule indexed pool. */
- cfg.size = 0;
- cfg.type = "mlx5_hw_table_rule";
- cfg.max_idx += priv->hw_q[0].size;
- tbl->resource = mlx5_ipool_create(&cfg);
- if (!tbl->resource)
- goto error;
/* Register the flow group. */
ge = mlx5_hlist_register(priv->sh->groups, attr->flow_attr.group, &ctx);
if (!ge)
@@ -4597,12 +4602,30 @@ flow_hw_table_create(struct rte_eth_dev *dev,
tbl->type = attr->flow_attr.transfer ? MLX5DR_TABLE_TYPE_FDB :
(attr->flow_attr.egress ? MLX5DR_TABLE_TYPE_NIC_TX :
MLX5DR_TABLE_TYPE_NIC_RX);
+ /*
+ * An additional index is needed only when the matcher supports rule update
+ * and a rule needs more than 1 WQE. Otherwise, the flow index can be reused.
+ */
+ rpool_needed = mlx5dr_matcher_is_updatable(tbl->matcher_info[0].matcher) &&
+ mlx5dr_matcher_is_dependent(tbl->matcher_info[0].matcher);
+ if (rpool_needed) {
+ /* Allocate rule indexed pool. */
+ cfg.size = 0;
+ cfg.type = "mlx5_hw_table_rule";
+ cfg.max_idx += priv->hw_q[0].size;
+ tbl->resource = mlx5_ipool_create(&cfg);
+ if (!tbl->resource)
+ goto res_error;
+ }
if (port_started)
LIST_INSERT_HEAD(&priv->flow_hw_tbl, tbl, next);
else
LIST_INSERT_HEAD(&priv->flow_hw_tbl_ongo, tbl, next);
rte_rwlock_init(&tbl->matcher_replace_rwlk);
return tbl;
+res_error:
+ if (tbl->matcher_info[0].matcher)
+ (void)mlx5dr_matcher_destroy(tbl->matcher_info[0].matcher);
at_error:
for (i = 0; i < nb_action_templates; i++) {
__flow_hw_action_template_destroy(dev, &tbl->ats[i].acts);
@@ -4620,8 +4643,6 @@ flow_hw_table_create(struct rte_eth_dev *dev,
if (tbl->grp)
mlx5_hlist_unregister(priv->sh->groups,
&tbl->grp->entry);
- if (tbl->resource)
- mlx5_ipool_destroy(tbl->resource);
if (tbl->flow)
mlx5_ipool_destroy(tbl->flow);
mlx5_free(tbl);
@@ -4830,12 +4851,13 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
uint32_t ridx = 1;
/* Build ipool allocated object bitmap. */
- mlx5_ipool_flush_cache(table->resource);
+ if (table->resource)
+ mlx5_ipool_flush_cache(table->resource);
mlx5_ipool_flush_cache(table->flow);
/* Check if ipool has allocated objects. */
if (table->refcnt ||
mlx5_ipool_get_next(table->flow, &fidx) ||
- mlx5_ipool_get_next(table->resource, &ridx)) {
+ (table->resource && mlx5_ipool_get_next(table->resource, &ridx))) {
DRV_LOG(WARNING, "Table %p is still in use.", (void *)table);
return rte_flow_error_set(error, EBUSY,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
@@ -4857,7 +4879,8 @@ flow_hw_table_destroy(struct rte_eth_dev *dev,
if (table->matcher_info[1].matcher)
mlx5dr_matcher_destroy(table->matcher_info[1].matcher);
mlx5_hlist_unregister(priv->sh->groups, &table->grp->entry);
- mlx5_ipool_destroy(table->resource);
+ if (table->resource)
+ mlx5_ipool_destroy(table->resource);
mlx5_ipool_destroy(table->flow);
mlx5_free(table);
return 0;
@@ -12476,11 +12499,11 @@ flow_hw_table_resize(struct rte_eth_dev *dev,
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
table, "cannot resize flows pool");
- ret = mlx5_ipool_resize(table->resource, nb_flows);
- if (ret)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
- table, "cannot resize resources pool");
+ /*
+ * A resizable matcher doesn't support rule update. In this case, the ipool
+ * for the resource is not created and there is no need to resize it.
+ */
+ MLX5_ASSERT(!table->resource);
if (mlx5_is_multi_pattern_active(&table->mpctx)) {
ret = flow_hw_table_resize_multi_pattern_actions(dev, table, nb_flows, error);
if (ret < 0)
--
2.39.2