* [PATCH] net/mlx5: fix refcount on detached indirect action
@ 2021-11-17 15:46 Dariusz Sosnowski
  2021-11-22 14:17 ` [PATCH v2] " Dariusz Sosnowski
  0 siblings, 1 reply; 6+ messages in thread
From: Dariusz Sosnowski @ 2021-11-17 15:46 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko, Dmitry Kozlyuk; +Cc: dev

This patch fixes a segfault that was triggered when a port with indirect
actions created was closed. The segfault occurred only when
RTE_LIBRTE_MLX5_DEBUG was defined. It was caused by a redundant
decrement of the RX queue refcount:

- the refcount was decremented when the port was stopped and indirect
  actions were detached from RX queues (port stop),
- the refcount was decremented when indirect action objects were
  destroyed (port close or destruction of an indirect action).

With this patch, RX queue refcounts are decremented if and only if the
indirect actions object is detached or destroyed.

Fixes: ec4e11d41d12 ("net/mlx5: preserve indirect actions on restart")
Cc: dkozlyuk@nvidia.com

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
---
 drivers/net/mlx5/mlx5_rxq.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 52b95d7070..b7af60df38 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2214,8 +2214,15 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 	if (ret)
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
-	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	/*
+	 * Refcounts on RX queues are decremented if and only if indirection
+	 * table was attached to RX queues. It will not be the case after
+	 * calling mlx5_dev_stop.
+	 */
+	if (priv->dev_data->dev_started) {
+		for (i = 0; i != ind_tbl->queues_n; ++i)
+			claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	}
 	mlx5_free(ind_tbl);
 	return 0;
 }
--
2.25.1
* [PATCH v2] net/mlx5: fix refcount on detached indirect action
  2021-11-17 15:46 [PATCH] net/mlx5: fix refcount on detached indirect action Dariusz Sosnowski
@ 2021-11-22 14:17 ` Dariusz Sosnowski
  2021-11-22 17:09   ` [PATCH v3] " Dariusz Sosnowski
  0 siblings, 1 reply; 6+ messages in thread
From: Dariusz Sosnowski @ 2021-11-22 14:17 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko, Dmitry Kozlyuk
  Cc: Raslan Darawsheh, dev, stable

This patch fixes a segfault that was triggered when a port with indirect
actions created was closed. The segfault occurred only when
RTE_LIBRTE_MLX5_DEBUG was defined. It was caused by a redundant
decrement of the RX queue refcount:

- the refcount was decremented when the port was stopped and indirect
  actions were detached from RX queues (port stop),
- the refcount was decremented when indirect action objects were
  destroyed (port close or destruction of an indirect action).

This patch fixes the behavior: RX queues are dereferenced if and only if
an indirect action is explicitly destroyed by the user or detached on
port stop. Whether RX queues are dereferenced by the action destroy
operation depends on an argument to the wrapper of the indirect action
destroy operation introduced in this patch.
Fixes: ec4e11d41d12 ("net/mlx5: preserve indirect actions on restart")
Cc: dkozlyuk@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c    | 43 +++++++++++++++++++++------------
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 15 +++++++++---
 drivers/net/mlx5/mlx5_rx.h      |  3 ++-
 drivers/net/mlx5/mlx5_rxq.c     | 21 ++++++++++------
 5 files changed, 56 insertions(+), 27 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 43598f92ee..fb4ec2d8da 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8644,6 +8644,27 @@ flow_drv_action_validate(struct rte_eth_dev *dev,
 	return fops->action_validate(dev, conf, action, error);
 }
 
+/* Wrapper for driver action_destroy op callback */
+static int
+flow_drv_action_destroy(struct rte_eth_dev *dev,
+			struct rte_flow_action_handle *handle,
+			bool deref_qs,
+			struct rte_flow_error *error)
+{
+	static const char err_msg[] = "indirect action destruction unsupported";
+	struct rte_flow_attr attr = { .transfer = 0 };
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(flow_get_drv_type(dev, &attr));
+
+	if (!fops->action_destroy) {
+		DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+				   NULL, err_msg);
+		return -rte_errno;
+	}
+	return fops->action_destroy(dev, handle, deref_qs, error);
+}
+
 /**
  * Destroys the shared action by handle.
  *
@@ -8665,21 +8686,10 @@ mlx5_action_handle_destroy(struct rte_eth_dev *dev,
 			   struct rte_flow_action_handle *handle,
 			   struct rte_flow_error *error)
 {
-	static const char err_msg[] = "indirect action destruction unsupported";
-	struct rte_flow_attr attr = { .transfer = 0 };
-	const struct mlx5_flow_driver_ops *fops =
-			flow_get_drv_ops(flow_get_drv_type(dev, &attr));
-
-	if (!fops->action_destroy) {
-		DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
-		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
-				   NULL, err_msg);
-		return -rte_errno;
-	}
-	return fops->action_destroy(dev, handle, error);
+	return flow_drv_action_destroy(dev, handle, true, error);
 }
 
-/* Wrapper for driver action_destroy op callback */
+/* Wrapper for driver action_update op callback */
 static int
 flow_drv_action_update(struct rte_eth_dev *dev,
 		       struct rte_flow_action_handle *handle,
@@ -8698,7 +8708,7 @@ flow_drv_action_update(struct rte_eth_dev *dev,
 	return fops->action_update(dev, handle, update, error);
 }
 
-/* Wrapper for driver action_destroy op callback */
+/* Wrapper for driver action_query op callback */
 static int
 flow_drv_action_query(struct rte_eth_dev *dev,
 		      const struct rte_flow_action_handle *handle,
@@ -8850,8 +8860,9 @@ mlx5_action_handle_flush(struct rte_eth_dev *dev)
 
 	ILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS],
 		      priv->rss_shared_actions, idx, shared_rss, next) {
-		ret |= mlx5_action_handle_destroy(dev,
-			(struct rte_flow_action_handle *)(uintptr_t)idx, &error);
+		ret |= flow_drv_action_destroy(dev,
+			(struct rte_flow_action_handle *)(uintptr_t)idx, false,
+			&error);
 	}
 	return ret;
 }
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1de2f2edb0..71ee513951 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1199,6 +1199,7 @@ typedef struct rte_flow_action_handle *(*mlx5_flow_action_create_t)
 typedef int (*mlx5_flow_action_destroy_t)
 			(struct rte_eth_dev *dev,
 			 struct rte_flow_action_handle *action,
+			 bool deref_qs,
 			 struct rte_flow_error *error);
 typedef int (*mlx5_flow_action_update_t)
 			(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2f03e59f9c..b0d59b94ca 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -14732,7 +14732,7 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev,
 error_hrxq_new:
 	err = rte_errno;
 	__flow_dv_action_rss_hrxqs_release(dev, shared_rss);
-	if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true))
+	if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true, true))
 		shared_rss->ind_tbl = NULL;
 	rte_errno = err;
 	return -rte_errno;
@@ -14839,6 +14839,9 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
  *   Pointer to the Ethernet device structure.
  * @param[in] idx
  *   The shared RSS action object ID to be removed.
+ * @param[in] deref_rxqs
+ *   If true, then dereference any RX queues related to shared RSS action.
+ *   Otherwise, no additional action will be taken.
  * @param[out] error
  *   Perform verbose error reporting if not NULL. Initialized in case of
  *   error only.
@@ -14848,6 +14851,7 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
  */
 static int
 __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx,
+			     bool deref_rxqs,
 			     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -14875,7 +14879,8 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx,
 					  NULL,
 					  "shared rss hrxq has references");
 	queue = shared_rss->ind_tbl->queues;
-	remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true);
+	remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true,
+					       deref_rxqs);
 	if (remaining)
 		return rte_flow_error_set(error, EBUSY,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -14977,6 +14982,9 @@ flow_dv_action_create(struct rte_eth_dev *dev,
  * @param[out] error
  *   Perform verbose error reporting if not NULL. Initialized in case of
  *   error only.
+ * @param[in] deref_qs
+ *   If true, then dereference any queues related to the shared action object.
+ *   Otherwise, no additional action will be taken.
  *
  * @return
  *   0 on success, otherwise negative errno value.
@@ -14984,6 +14992,7 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 static int
 flow_dv_action_destroy(struct rte_eth_dev *dev,
 		       struct rte_flow_action_handle *handle,
+		       bool deref_qs,
 		       struct rte_flow_error *error)
 {
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
@@ -14995,7 +15004,7 @@ flow_dv_action_destroy(struct rte_eth_dev *dev,
 
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_RSS:
-		return __flow_dv_action_rss_release(dev, idx, error);
+		return __flow_dv_action_rss_release(dev, idx, deref_qs, error);
 	case MLX5_INDIRECT_ACTION_TYPE_COUNT:
 		cnt = flow_dv_counter_get_by_idx(dev, idx, NULL);
 		if (!__atomic_compare_exchange_n(&cnt->shared_info.refcnt,
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 9cc1a2703b..b19464bb37 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -225,7 +225,8 @@ struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
 						  uint32_t queues_n);
 int mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 			       struct mlx5_ind_table_obj *ind_tbl,
-			       bool standalone);
+			       bool standalone,
+			       bool deref_rxqs);
 int mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			     struct mlx5_ind_table_obj *ind_tbl);
 int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 480f4f9f07..1f6ddbab8b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2195,6 +2195,9 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
  *   Indirection table to release.
  * @param standalone
  *   Indirection table for Standalone queue.
+ * @param deref_rxqs
+ *   If true, then dereference RX queues related to indirection table.
+ *   Otherwise, no additional action will be taken.
  *
  * @return
  *   1 while a reference on it exists, 0 when freed.
@@ -2202,7 +2205,8 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
 int
 mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 			   struct mlx5_ind_table_obj *ind_tbl,
-			   bool standalone)
+			   bool standalone,
+			   bool deref_rxqs)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i, ret;
@@ -2215,8 +2219,10 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 	if (ret)
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
-	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	if (deref_rxqs) {
+		for (i = 0; i != ind_tbl->queues_n; ++i)
+			claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	}
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -2573,7 +2579,7 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
 	if (ind_tbl != hrxq->ind_table) {
 		MLX5_ASSERT(!hrxq->standalone);
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table,
-					   hrxq->standalone);
+					   hrxq->standalone, true);
 		hrxq->ind_table = ind_tbl;
 	}
 	hrxq->hash_fields = hash_fields;
@@ -2583,7 +2589,8 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
 	err = rte_errno;
 	if (ind_tbl != hrxq->ind_table) {
 		MLX5_ASSERT(!hrxq->standalone);
-		mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone);
+		mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone,
+					   true);
 	}
 	rte_errno = err;
 	return -rte_errno;
@@ -2600,7 +2607,7 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
 	priv->obj_ops.hrxq_destroy(hrxq);
 	if (!hrxq->standalone) {
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table,
-					   hrxq->standalone);
+					   hrxq->standalone, true);
 	}
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx);
 }
@@ -2666,7 +2673,7 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev,
 	return hrxq;
 error:
 	if (!rss_desc->ind_tbl)
-		mlx5_ind_table_obj_release(dev, ind_tbl, standalone);
+		mlx5_ind_table_obj_release(dev, ind_tbl, standalone, true);
 	if (hrxq)
 		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
 	return NULL;
--
2.25.1
* [PATCH v3] net/mlx5: fix refcount on detached indirect action
  2021-11-22 14:17 ` [PATCH v2] " Dariusz Sosnowski
@ 2021-11-22 17:09   ` Dariusz Sosnowski
  2021-11-23 13:26     ` Slava Ovsiienko
  2021-11-23 15:38     ` [PATCH v4] " Dariusz Sosnowski
  0 siblings, 2 replies; 6+ messages in thread
From: Dariusz Sosnowski @ 2021-11-22 17:09 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko, Dmitry Kozlyuk
  Cc: Raslan Darawsheh, dev, stable

This patch fixes a segfault that was triggered when a port with indirect
actions created was closed. The segfault occurred only when
RTE_LIBRTE_MLX5_DEBUG was defined. It was caused by a redundant
decrement of the RX queue refcount:

- the refcount was decremented when the port was stopped and indirect
  actions were detached from RX queues (port stop),
- the refcount was decremented when indirect action objects were
  destroyed (port close or destruction of an indirect action).

This patch fixes the behavior: RX queues are dereferenced if and only if
an indirect action is explicitly destroyed by the user or detached on
port stop. Whether RX queues are dereferenced by the action destroy
operation depends on an argument to the wrapper of the indirect action
destroy operation introduced in this patch.
Fixes: ec4e11d41d12 ("net/mlx5: preserve indirect actions on restart")
Cc: dkozlyuk@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
---
 drivers/net/mlx5/mlx5_flow.c    | 42 ++++++++++++++++++++++---------
 drivers/net/mlx5/mlx5_flow.h    |  1 +
 drivers/net/mlx5/mlx5_flow_dv.c | 15 +++++++++---
 drivers/net/mlx5/mlx5_rx.h      |  3 ++-
 drivers/net/mlx5/mlx5_rxq.c     | 21 +++++++++++------
 5 files changed, 60 insertions(+), 22 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index 43598f92ee..d6cf19a6cd 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -8644,6 +8644,25 @@ flow_drv_action_validate(struct rte_eth_dev *dev,
 	return fops->action_validate(dev, conf, action, error);
 }
 
+/* Wrapper for driver action_destroy op callback */
+static int
+flow_drv_action_destroy(struct rte_eth_dev *dev,
+			struct rte_flow_action_handle *handle,
+			bool deref_qs,
+			const struct mlx5_flow_driver_ops *fops,
+			struct rte_flow_error *error)
+{
+	static const char err_msg[] = "indirect action destruction unsupported";
+
+	if (!fops->action_destroy) {
+		DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
+		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+				   NULL, err_msg);
+		return -rte_errno;
+	}
+	return fops->action_destroy(dev, handle, deref_qs, error);
+}
+
 /**
  * Destroys the shared action by handle.
  *
@@ -8665,21 +8684,18 @@ mlx5_action_handle_destroy(struct rte_eth_dev *dev,
 			   struct rte_flow_action_handle *handle,
 			   struct rte_flow_error *error)
 {
-	static const char err_msg[] = "indirect action destruction unsupported";
 	struct rte_flow_attr attr = { .transfer = 0 };
 	const struct mlx5_flow_driver_ops *fops =
 			flow_get_drv_ops(flow_get_drv_type(dev, &attr));
 
-	if (!fops->action_destroy) {
-		DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
-		rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
-				   NULL, err_msg);
-		return -rte_errno;
+	if (dev->data->dev_started) {
+		return flow_drv_action_destroy(dev, handle, true, fops, error);
+	} else {
+		return flow_drv_action_destroy(dev, handle, false, fops, error);
 	}
-	return fops->action_destroy(dev, handle, error);
 }
 
-/* Wrapper for driver action_destroy op callback */
+/* Wrapper for driver action_update op callback */
 static int
 flow_drv_action_update(struct rte_eth_dev *dev,
 		       struct rte_flow_action_handle *handle,
@@ -8698,7 +8714,7 @@ flow_drv_action_update(struct rte_eth_dev *dev,
 	return fops->action_update(dev, handle, update, error);
 }
 
-/* Wrapper for driver action_destroy op callback */
+/* Wrapper for driver action_query op callback */
 static int
 flow_drv_action_query(struct rte_eth_dev *dev,
 		      const struct rte_flow_action_handle *handle,
@@ -8845,13 +8861,17 @@ mlx5_action_handle_flush(struct rte_eth_dev *dev)
 	struct rte_flow_error error;
 	struct mlx5_priv *priv = dev->data->dev_private;
 	struct mlx5_shared_action_rss *shared_rss;
+	struct rte_flow_attr attr = { .transfer = 0 };
+	const struct mlx5_flow_driver_ops *fops =
+			flow_get_drv_ops(flow_get_drv_type(dev, &attr));
 	int ret = 0;
 	uint32_t idx;
 
 	ILIST_FOREACH(priv->sh->ipool[MLX5_IPOOL_RSS_SHARED_ACTIONS],
 		      priv->rss_shared_actions, idx, shared_rss, next) {
-		ret |= mlx5_action_handle_destroy(dev,
-			(struct rte_flow_action_handle *)(uintptr_t)idx, &error);
+		ret |= flow_drv_action_destroy(dev,
+			(struct rte_flow_action_handle *)(uintptr_t)idx, false,
+			fops, &error);
 	}
 	return ret;
 }
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 1de2f2edb0..71ee513951 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -1199,6 +1199,7 @@ typedef struct rte_flow_action_handle *(*mlx5_flow_action_create_t)
 typedef int (*mlx5_flow_action_destroy_t)
 			(struct rte_eth_dev *dev,
 			 struct rte_flow_action_handle *action,
+			 bool deref_qs,
 			 struct rte_flow_error *error);
 typedef int (*mlx5_flow_action_update_t)
 			(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2f03e59f9c..b0d59b94ca 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -14732,7 +14732,7 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev,
 error_hrxq_new:
 	err = rte_errno;
 	__flow_dv_action_rss_hrxqs_release(dev, shared_rss);
-	if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true))
+	if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true, true))
 		shared_rss->ind_tbl = NULL;
 	rte_errno = err;
 	return -rte_errno;
@@ -14839,6 +14839,9 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
  *   Pointer to the Ethernet device structure.
  * @param[in] idx
  *   The shared RSS action object ID to be removed.
+ * @param[in] deref_rxqs
+ *   If true, then dereference any RX queues related to shared RSS action.
+ *   Otherwise, no additional action will be taken.
  * @param[out] error
  *   Perform verbose error reporting if not NULL. Initialized in case of
  *   error only.
@@ -14848,6 +14851,7 @@ __flow_dv_action_rss_create(struct rte_eth_dev *dev,
  */
 static int
 __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx,
+			     bool deref_rxqs,
 			     struct rte_flow_error *error)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
@@ -14875,7 +14879,8 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx,
 					  NULL,
 					  "shared rss hrxq has references");
 	queue = shared_rss->ind_tbl->queues;
-	remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true);
+	remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true,
+					       deref_rxqs);
 	if (remaining)
 		return rte_flow_error_set(error, EBUSY,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
@@ -14977,6 +14982,9 @@ flow_dv_action_create(struct rte_eth_dev *dev,
  * @param[out] error
  *   Perform verbose error reporting if not NULL. Initialized in case of
  *   error only.
+ * @param[in] deref_qs
+ *   If true, then dereference any queues related to the shared action object.
+ *   Otherwise, no additional action will be taken.
  *
  * @return
  *   0 on success, otherwise negative errno value.
@@ -14984,6 +14992,7 @@ flow_dv_action_create(struct rte_eth_dev *dev,
 static int
 flow_dv_action_destroy(struct rte_eth_dev *dev,
 		       struct rte_flow_action_handle *handle,
+		       bool deref_qs,
 		       struct rte_flow_error *error)
 {
 	uint32_t act_idx = (uint32_t)(uintptr_t)handle;
@@ -14995,7 +15004,7 @@ flow_dv_action_destroy(struct rte_eth_dev *dev,
 
 	switch (type) {
 	case MLX5_INDIRECT_ACTION_TYPE_RSS:
-		return __flow_dv_action_rss_release(dev, idx, error);
+		return __flow_dv_action_rss_release(dev, idx, deref_qs, error);
 	case MLX5_INDIRECT_ACTION_TYPE_COUNT:
 		cnt = flow_dv_counter_get_by_idx(dev, idx, NULL);
 		if (!__atomic_compare_exchange_n(&cnt->shared_info.refcnt,
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 9cc1a2703b..b19464bb37 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -225,7 +225,8 @@ struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
 						  uint32_t queues_n);
 int mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 			       struct mlx5_ind_table_obj *ind_tbl,
-			       bool standalone);
+			       bool standalone,
+			       bool deref_rxqs);
 int mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			     struct mlx5_ind_table_obj *ind_tbl);
 int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 480f4f9f07..1f6ddbab8b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2195,6 +2195,9 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
  *   Indirection table to release.
  * @param standalone
  *   Indirection table for Standalone queue.
+ * @param deref_rxqs
+ *   If true, then dereference RX queues related to indirection table.
+ *   Otherwise, no additional action will be taken.
  *
  * @return
  *   1 while a reference on it exists, 0 when freed.
@@ -2202,7 +2205,8 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
 int
 mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 			   struct mlx5_ind_table_obj *ind_tbl,
-			   bool standalone)
+			   bool standalone,
+			   bool deref_rxqs)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i, ret;
@@ -2215,8 +2219,10 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 	if (ret)
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
-	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	if (deref_rxqs) {
+		for (i = 0; i != ind_tbl->queues_n; ++i)
+			claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	}
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -2573,7 +2579,7 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
 	if (ind_tbl != hrxq->ind_table) {
 		MLX5_ASSERT(!hrxq->standalone);
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table,
-					   hrxq->standalone);
+					   hrxq->standalone, true);
 		hrxq->ind_table = ind_tbl;
 	}
 	hrxq->hash_fields = hash_fields;
@@ -2583,7 +2589,8 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
 	err = rte_errno;
 	if (ind_tbl != hrxq->ind_table) {
 		MLX5_ASSERT(!hrxq->standalone);
-		mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone);
+		mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone,
+					   true);
 	}
 	rte_errno = err;
 	return -rte_errno;
@@ -2600,7 +2607,7 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
 	priv->obj_ops.hrxq_destroy(hrxq);
 	if (!hrxq->standalone) {
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table,
-					   hrxq->standalone);
+					   hrxq->standalone, true);
 	}
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx);
 }
@@ -2666,7 +2673,7 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev,
 	return hrxq;
 error:
 	if (!rss_desc->ind_tbl)
-		mlx5_ind_table_obj_release(dev, ind_tbl, standalone);
+		mlx5_ind_table_obj_release(dev, ind_tbl, standalone, true);
 	if (hrxq)
 		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
 	return NULL;
--
2.25.1
* RE: [PATCH v3] net/mlx5: fix refcount on detached indirect action
  2021-11-22 17:09 ` [PATCH v3] " Dariusz Sosnowski
@ 2021-11-23 13:26   ` Slava Ovsiienko
  0 siblings, 0 replies; 6+ messages in thread
From: Slava Ovsiienko @ 2021-11-23 13:26 UTC (permalink / raw)
  To: Dariusz Sosnowski, Matan Azrad, Dmitry Kozlyuk
  Cc: Raslan Darawsheh, dev, stable

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Sent: Monday, November 22, 2021 19:10
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Cc: Raslan Darawsheh <rasland@nvidia.com>; dev@dpdk.org;
> stable@dpdk.org
> Subject: [PATCH v3] net/mlx5: fix refcount on detached indirect action
>
> This patch fixes segfault which was triggered when port, with indirect actions
> created, was closed. Segfault was occurring only when
> RTE_LIBRTE_MLX5_DEBUG was defined. It was caused by redundant
> decrement of RX queues refcount:
>
> - refcount was decremented when port was stopped and indirect actions were
> detached from RX queues (port stop),
> - refcount was decremented when indirect actions objects were destroyed
> (port close or destroying of indirect action).
>
> This patch fixes behavior. Dereferencing RX queues is done if and only if
> indirect action is explicitly destroyed by the user or detached on port stop.
> Dereferencing RX queues on action destroy operation depends on an
> argument to the wrapper of indirect action destroy operation, introduced in
> this patch.
>
> Fixes: ec4e11d41d12 ("net/mlx5: preserve indirect actions on restart")
> Cc: dkozlyuk@nvidia.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
* [PATCH v4] net/mlx5: fix refcount on detached indirect action
  2021-11-22 17:09 ` [PATCH v3] " Dariusz Sosnowski
  2021-11-23 13:26   ` Slava Ovsiienko
@ 2021-11-23 15:38   ` Dariusz Sosnowski
  2021-11-23 20:35     ` Raslan Darawsheh
  1 sibling, 1 reply; 6+ messages in thread
From: Dariusz Sosnowski @ 2021-11-23 15:38 UTC (permalink / raw)
  To: Matan Azrad, Viacheslav Ovsiienko, Dmitry Kozlyuk
  Cc: dev, Raslan Darawsheh, stable

This patch fixes a segfault that was triggered when a port with indirect
actions created was closed. The segfault occurred only when
RTE_LIBRTE_MLX5_DEBUG was defined. It was caused by a redundant
decrement of the RX queue refcount:

- the refcount was decremented when the port was stopped and indirect
  actions were detached from RX queues (port stop),
- the refcount was decremented when indirect action objects were
  destroyed (port close or destruction of an indirect action).

This patch fixes the behavior: RX queues are dereferenced if and only if
an indirect action is explicitly destroyed by the user or detached on
port stop. Whether RX queues are dereferenced by the action destroy
operation depends on an argument to the wrapper of the indirect action
destroy operation introduced in this patch.

Fixes: ec4e11d41d12 ("net/mlx5: preserve indirect actions on restart")
Cc: dkozlyuk@nvidia.com
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
---
v4:
* Simplify the dev_started check.
* Remove redundant passing of the deref_rxqs argument.

v3:
* Fix handling of action destroy between port start and stop.
* Revert moving the contents of mlx5_action_handle_destroy.

v2:
* Introduce a wrapper over the indirect action destroy operation.
* Fix typos in the commit message.
 drivers/net/mlx5/mlx5_flow_dv.c |  5 +++--
 drivers/net/mlx5/mlx5_rx.h      |  3 ++-
 drivers/net/mlx5/mlx5_rxq.c     | 21 ++++++++++++++-------
 3 files changed, 19 insertions(+), 10 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2f03e59f9c..1d46fa48b0 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -14732,7 +14732,7 @@ __flow_dv_action_rss_setup(struct rte_eth_dev *dev,
 error_hrxq_new:
 	err = rte_errno;
 	__flow_dv_action_rss_hrxqs_release(dev, shared_rss);
-	if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true))
+	if (!mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true, true))
 		shared_rss->ind_tbl = NULL;
 	rte_errno = err;
 	return -rte_errno;
@@ -14875,7 +14875,8 @@ __flow_dv_action_rss_release(struct rte_eth_dev *dev, uint32_t idx,
 					  NULL,
 					  "shared rss hrxq has references");
 	queue = shared_rss->ind_tbl->queues;
-	remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true);
+	remaining = mlx5_ind_table_obj_release(dev, shared_rss->ind_tbl, true,
+					       !!dev->data->dev_started);
 	if (remaining)
 		return rte_flow_error_set(error, EBUSY,
 					  RTE_FLOW_ERROR_TYPE_ACTION,
diff --git a/drivers/net/mlx5/mlx5_rx.h b/drivers/net/mlx5/mlx5_rx.h
index 9cc1a2703b..b19464bb37 100644
--- a/drivers/net/mlx5/mlx5_rx.h
+++ b/drivers/net/mlx5/mlx5_rx.h
@@ -225,7 +225,8 @@ struct mlx5_ind_table_obj *mlx5_ind_table_obj_get(struct rte_eth_dev *dev,
 						  uint32_t queues_n);
 int mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 			       struct mlx5_ind_table_obj *ind_tbl,
-			       bool standalone);
+			       bool standalone,
+			       bool deref_rxqs);
 int mlx5_ind_table_obj_setup(struct rte_eth_dev *dev,
 			     struct mlx5_ind_table_obj *ind_tbl);
 int mlx5_ind_table_obj_modify(struct rte_eth_dev *dev,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 480f4f9f07..1f6ddbab8b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -2195,6 +2195,9 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
  *   Indirection table to release.
  * @param standalone
  *   Indirection table for Standalone queue.
+ * @param deref_rxqs
+ *   If true, then dereference RX queues related to indirection table.
+ *   Otherwise, no additional action will be taken.
  *
  * @return
  *   1 while a reference on it exists, 0 when freed.
@@ -2202,7 +2205,8 @@ mlx5_ind_table_obj_get(struct rte_eth_dev *dev, const uint16_t *queues,
 int
 mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 			   struct mlx5_ind_table_obj *ind_tbl,
-			   bool standalone)
+			   bool standalone,
+			   bool deref_rxqs)
 {
 	struct mlx5_priv *priv = dev->data->dev_private;
 	unsigned int i, ret;
@@ -2215,8 +2219,10 @@ mlx5_ind_table_obj_release(struct rte_eth_dev *dev,
 	if (ret)
 		return 1;
 	priv->obj_ops.ind_table_destroy(ind_tbl);
-	for (i = 0; i != ind_tbl->queues_n; ++i)
-		claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	if (deref_rxqs) {
+		for (i = 0; i != ind_tbl->queues_n; ++i)
+			claim_nonzero(mlx5_rxq_deref(dev, ind_tbl->queues[i]));
+	}
 	mlx5_free(ind_tbl);
 	return 0;
 }
@@ -2573,7 +2579,7 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
 	if (ind_tbl != hrxq->ind_table) {
 		MLX5_ASSERT(!hrxq->standalone);
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table,
-					   hrxq->standalone);
+					   hrxq->standalone, true);
 		hrxq->ind_table = ind_tbl;
 	}
 	hrxq->hash_fields = hash_fields;
@@ -2583,7 +2589,8 @@ mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
 	err = rte_errno;
 	if (ind_tbl != hrxq->ind_table) {
 		MLX5_ASSERT(!hrxq->standalone);
-		mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone);
+		mlx5_ind_table_obj_release(dev, ind_tbl, hrxq->standalone,
+					   true);
 	}
 	rte_errno = err;
 	return -rte_errno;
@@ -2600,7 +2607,7 @@ __mlx5_hrxq_remove(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq)
 	priv->obj_ops.hrxq_destroy(hrxq);
 	if (!hrxq->standalone) {
 		mlx5_ind_table_obj_release(dev, hrxq->ind_table,
-					   hrxq->standalone);
+					   hrxq->standalone, true);
 	}
 	mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq->idx);
 }
@@ -2666,7 +2673,7 @@ __mlx5_hrxq_create(struct rte_eth_dev *dev,
 	return hrxq;
 error:
 	if (!rss_desc->ind_tbl)
-		mlx5_ind_table_obj_release(dev, ind_tbl, standalone);
+		mlx5_ind_table_obj_release(dev, ind_tbl, standalone, true);
 	if (hrxq)
 		mlx5_ipool_free(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
 	return NULL;
--
2.25.1
* RE: [PATCH v4] net/mlx5: fix refcount on detached indirect action
  2021-11-23 15:38 ` [PATCH v4] " Dariusz Sosnowski
@ 2021-11-23 20:35   ` Raslan Darawsheh
  0 siblings, 0 replies; 6+ messages in thread
From: Raslan Darawsheh @ 2021-11-23 20:35 UTC (permalink / raw)
  To: Dariusz Sosnowski, Matan Azrad, Slava Ovsiienko, Dmitry Kozlyuk
  Cc: dev, stable

Hi,

> -----Original Message-----
> From: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Sent: Tuesday, November 23, 2021 5:38 PM
> To: Matan Azrad <matan@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Cc: dev@dpdk.org; Raslan Darawsheh <rasland@nvidia.com>;
> stable@dpdk.org
> Subject: [PATCH v4] net/mlx5: fix refcount on detached indirect action
>
> This patch fixes segfault which was triggered when port, with indirect actions
> created, was closed. Segfault was occurring only when
> RTE_LIBRTE_MLX5_DEBUG was defined. It was caused by redundant
> decrement of RX queues refcount:
>
> - refcount was decremented when port was stopped and indirect actions
> were detached from RX queues (port stop),
> - refcount was decremented when indirect actions objects were destroyed
> (port close or destroying of indirect action).
>
> This patch fixes behavior. Dereferencing RX queues is done if and only if
> indirect action is explicitly destroyed by the user or detached on port stop.
> Dereferencing RX queues on action destroy operation depends on an
> argument to the wrapper of indirect action destroy operation, introduced in
> this patch.
>
> Fixes: ec4e11d41d12 ("net/mlx5: preserve indirect actions on restart")
> Cc: dkozlyuk@nvidia.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
> Acked-by: Matan Azrad <matan@nvidia.com>
> ---
> v4:
> * Simplify dev_started checking.
> * Remove redundant passes of deref_rxqs argument.
>
> v3:
> * Fix handling action destroy in between port start and stop.
> * Revert moving contents of mlx5_action_handle_destroy
>
> v2:
> * Introduce wrapper over action action destroy operation.
> * Fix typos in commit message.

Patch applied to next-net-mlx,

Kindest regards,
Raslan Darawsheh
end of thread, other threads:[~2021-11-23 20:35 UTC | newest]

Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-11-17 15:46 [PATCH] net/mlx5: fix refcount on detached indirect action Dariusz Sosnowski
2021-11-22 14:17 ` [PATCH v2] " Dariusz Sosnowski
2021-11-22 17:09   ` [PATCH v3] " Dariusz Sosnowski
2021-11-23 13:26     ` Slava Ovsiienko
2021-11-23 15:38     ` [PATCH v4] " Dariusz Sosnowski
2021-11-23 20:35       ` Raslan Darawsheh