* [dpdk-dev] [PATCH 0/4] Shared action RSS PMD impl
@ 2020-10-08 12:18 Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
` (4 more replies)
0 siblings, 5 replies; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-08 12:18 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta
This patchset introduces the Mellanox PMD implementation for the shared
RSS action. It was part of the 'RTE flow shared action API' patchset [1],
but after v3 it was split into the RTE flow layer changes [2] and the PMD
implementation (this patchset).
The PMD implementation in this patchset is based on the RTE flow API [3].
The patchset is currently a draft; v2 will be sent very soon.
[1] RTE flow shared action API v1
http://inbox.dpdk.org/dev/20200702120511.16315-1-andreyv@mellanox.com/
[2] RTE flow shared action API v4
http://inbox.dpdk.org/dev/20201006200835.30017-1-andreyv@nvidia.com/
[3] RTE flow shared action API v7
http://inbox.dpdk.org/dev/20201008115143.13208-1-andreyv@nvidia.com/
Andrey Vesnovaty (4):
common/mlx5: modify advanced Rx object via DevX
net/mlx5: modify hash Rx queue objects
net/mlx5: shared action PMD
net/mlx5: driver support for shared action
drivers/common/mlx5/mlx5_devx_cmds.c | 84 +++
drivers/common/mlx5/mlx5_devx_cmds.h | 10 +
drivers/common/mlx5/mlx5_prm.h | 29 +
.../common/mlx5/rte_common_mlx5_version.map | 1 +
drivers/net/mlx5/mlx5.c | 1 +
drivers/net/mlx5/mlx5.h | 6 +
drivers/net/mlx5/mlx5_defs.h | 3 +
drivers/net/mlx5/mlx5_devx.c | 173 ++++-
drivers/net/mlx5/mlx5_flow.c | 497 ++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 86 +++
drivers/net/mlx5/mlx5_flow_dv.c | 684 +++++++++++++++++-
drivers/net/mlx5/mlx5_rxq.c | 103 +++
drivers/net/mlx5/mlx5_rxtx.h | 5 +-
13 files changed, 1588 insertions(+), 94 deletions(-)
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [dpdk-dev] [PATCH 1/4] common/mlx5: modify advanced Rx object via DevX
2020-10-08 12:18 [dpdk-dev] [PATCH 0/4] Shared action RSS PMD impl Andrey Vesnovaty
@ 2020-10-08 12:18 ` Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 2/4] net/mlx5: modify hash Rx queue objects Andrey Vesnovaty
` (3 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-08 12:18 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Andrey Vesnovaty,
Matan Azrad, Shahaf Shuler
From: Andrey Vesnovaty <andreyv@mellanox.com>
Implement mlx5_devx_cmd_modify_tir() to modify a TIR object via the DevX
API.
Add the related structures in mlx5_prm.h.
Signed-off-by: Andrey Vesnovaty <andreyv@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 84 +++++++++++++++++++
drivers/common/mlx5/mlx5_devx_cmds.h | 10 +++
drivers/common/mlx5/mlx5_prm.h | 29 +++++++
.../common/mlx5/rte_common_mlx5_version.map | 1 +
4 files changed, 124 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 7c81ae15a9..2b109c4f65 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1080,6 +1080,90 @@ mlx5_devx_cmd_create_tir(void *ctx,
return tir;
}
+/**
+ * Modify TIR using DevX API.
+ *
+ * @param[in] tir
+ * Pointer to TIR DevX object structure.
+ * @param [in] modify_tir_attr
+ * Pointer to TIR modification attributes structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
+ struct mlx5_devx_modify_tir_attr *modify_tir_attr)
+{
+ struct mlx5_devx_tir_attr *tir_attr = &modify_tir_attr->tir;
+ uint32_t in[MLX5_ST_SZ_DW(modify_tir_in)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(modify_tir_out)] = {0};
+ void *tir_ctx;
+ int ret;
+
+ MLX5_SET(modify_tir_in, in, opcode, MLX5_CMD_OP_MODIFY_TIR);
+ MLX5_SET(modify_tir_in, in, tirn, modify_tir_attr->tirn);
+ MLX5_SET64(modify_tir_in, in, modify_bitmask,
+ modify_tir_attr->modify_bitmask);
+
+ tir_ctx = MLX5_ADDR_OF(modify_tir_in, in, ctx);
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_LRO) {
+ MLX5_SET(tirc, tir_ctx, lro_timeout_period_usecs,
+ tir_attr->lro_timeout_period_usecs);
+ MLX5_SET(tirc, tir_ctx, lro_enable_mask,
+ tir_attr->lro_enable_mask);
+ MLX5_SET(tirc, tir_ctx, lro_max_msg_sz,
+ tir_attr->lro_max_msg_sz);
+ }
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE)
+ MLX5_SET(tirc, tir_ctx, indirect_table,
+ tir_attr->indirect_table);
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH) {
+ int i;
+ void *outer, *inner;
+ MLX5_SET(tirc, tir_ctx, rx_hash_symmetric,
+ tir_attr->rx_hash_symmetric);
+ MLX5_SET(tirc, tir_ctx, rx_hash_fn, tir_attr->rx_hash_fn);
+ for (i = 0; i < 10; i++) {
+ MLX5_SET(tirc, tir_ctx, rx_hash_toeplitz_key[i],
+ tir_attr->rx_hash_toeplitz_key[i]);
+ }
+ outer = MLX5_ADDR_OF(tirc, tir_ctx,
+ rx_hash_field_selector_outer);
+ MLX5_SET(rx_hash_field_select, outer, l3_prot_type,
+ tir_attr->rx_hash_field_selector_outer.l3_prot_type);
+ MLX5_SET(rx_hash_field_select, outer, l4_prot_type,
+ tir_attr->rx_hash_field_selector_outer.l4_prot_type);
+ MLX5_SET
+ (rx_hash_field_select, outer, selected_fields,
+ tir_attr->rx_hash_field_selector_outer.selected_fields);
+ inner = MLX5_ADDR_OF(tirc, tir_ctx,
+ rx_hash_field_selector_inner);
+ MLX5_SET(rx_hash_field_select, inner, l3_prot_type,
+ tir_attr->rx_hash_field_selector_inner.l3_prot_type);
+ MLX5_SET(rx_hash_field_select, inner, l4_prot_type,
+ tir_attr->rx_hash_field_selector_inner.l4_prot_type);
+ MLX5_SET
+ (rx_hash_field_select, inner, selected_fields,
+ tir_attr->rx_hash_field_selector_inner.selected_fields);
+ }
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_SELF_LB_EN) {
+ MLX5_SET(tirc, tir_ctx, self_lb_block, tir_attr->self_lb_block);
+ }
+ ret = mlx5_glue->devx_obj_modify(tir->obj, in, sizeof(in),
+ out, sizeof(out));
+ if (ret) {
+ DRV_LOG(ERR, "Failed to modify TIR using DevX");
+ rte_errno = errno;
+ return -errno;
+ }
+ return ret;
+}
+
/**
* Create RQT using DevX API.
*
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 1c84cea851..ba6cb6ed51 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -190,6 +190,13 @@ struct mlx5_devx_tir_attr {
struct mlx5_rx_hash_field_select rx_hash_field_selector_inner;
};
+/* TIR attributes structure, used by TIR modify */
+struct mlx5_devx_modify_tir_attr {
+ uint32_t tirn:24;
+ uint64_t modify_bitmask;
+ struct mlx5_devx_tir_attr tir;
+};
+
/* RQT attributes structure, used by RQT operations. */
struct mlx5_devx_rqt_attr {
uint8_t rq_type;
@@ -434,6 +441,9 @@ __rte_internal
int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
struct mlx5_devx_rqt_attr *rqt_attr);
__rte_internal
+int mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
+ struct mlx5_devx_modify_tir_attr *tir_attr);
+__rte_internal
int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
uint32_t ids[], uint32_t num);
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 20f2fccd4f..2dbae445b3 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -830,6 +830,7 @@ enum {
MLX5_CMD_OP_ACCESS_REGISTER = 0x805,
MLX5_CMD_OP_ALLOC_TRANSPORT_DOMAIN = 0x816,
MLX5_CMD_OP_CREATE_TIR = 0x900,
+ MLX5_CMD_OP_MODIFY_TIR = 0x901,
MLX5_CMD_OP_CREATE_SQ = 0X904,
MLX5_CMD_OP_MODIFY_SQ = 0X905,
MLX5_CMD_OP_CREATE_RQ = 0x908,
@@ -1858,6 +1859,34 @@ struct mlx5_ifc_create_tir_in_bits {
struct mlx5_ifc_tirc_bits ctx;
};
+enum {
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_LRO = 1ULL << 0,
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE = 1ULL << 1,
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH = 1ULL << 2,
+ /* bit 3 - tunneled_offload_en modify not supported */
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_SELF_LB_EN = 1ULL << 4,
+};
+
+struct mlx5_ifc_modify_tir_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+ u8 syndrome[0x20];
+ u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_modify_tir_in_bits {
+ u8 opcode[0x10];
+ u8 uid[0x10];
+ u8 reserved_at_20[0x10];
+ u8 op_mod[0x10];
+ u8 reserved_at_40[0x8];
+ u8 tirn[0x18];
+ u8 reserved_at_60[0x20];
+ u8 modify_bitmask[0x40];
+ u8 reserved_at_c0[0x40];
+ struct mlx5_ifc_tirc_bits ctx;
+};
+
enum {
MLX5_INLINE_Q_TYPE_RQ = 0x0,
MLX5_INLINE_Q_TYPE_VIRTQ = 0x1,
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index c4d57c08a7..884001ca7d 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -30,6 +30,7 @@ INTERNAL {
mlx5_devx_cmd_modify_rq;
mlx5_devx_cmd_modify_rqt;
mlx5_devx_cmd_modify_sq;
+ mlx5_devx_cmd_modify_tir;
mlx5_devx_cmd_modify_virtq;
mlx5_devx_cmd_qp_query_tis_td;
mlx5_devx_cmd_query_hca_attr;
--
2.26.2
* [dpdk-dev] [PATCH 2/4] net/mlx5: modify hash Rx queue objects
2020-10-08 12:18 [dpdk-dev] [PATCH 0/4] Shared action RSS PMD impl Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
@ 2020-10-08 12:18 ` Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 3/4] net/mlx5: shared action PMD Andrey Vesnovaty
` (2 subsequent siblings)
4 siblings, 0 replies; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-08 12:18 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Andrey Vesnovaty,
Matan Azrad, Shahaf Shuler
From: Andrey Vesnovaty <andreyv@mellanox.com>
Implement mlx5_hrxq_modify() to modify a hash Rx queue object.
This commit relies on the capability to modify a TIR object via DevX.
Signed-off-by: Andrey Vesnovaty <andreyv@mellanox.com>
---
drivers/net/mlx5/mlx5.h | 4 +
drivers/net/mlx5/mlx5_devx.c | 173 +++++++++++++++++++++++++++--------
drivers/net/mlx5/mlx5_rxq.c | 103 +++++++++++++++++++++
drivers/net/mlx5/mlx5_rxtx.h | 5 +-
4 files changed, 246 insertions(+), 39 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 87d3c15f07..7b85f64167 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -784,6 +784,10 @@ struct mlx5_obj_ops {
void (*ind_table_destroy)(struct mlx5_ind_table_obj *ind_tbl);
int (*hrxq_new)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
int tunnel __rte_unused);
+ int (*hrxq_modify)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
+ const uint8_t *rss_key,
+ uint64_t hash_fields,
+ const struct mlx5_ind_table_obj *ind_tbl);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
int (*drop_action_create)(struct rte_eth_dev *dev);
void (*drop_action_destroy)(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 11bda32557..600afd5929 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -731,33 +731,39 @@ mlx5_devx_ind_table_destroy(struct mlx5_ind_table_obj *ind_tbl)
}
/**
- * Create an Rx Hash queue.
+ * Set TIR attribute struct with relevant input values.
*
- * @param dev
+ * @param[in] dev
* Pointer to Ethernet device.
- * @param hrxq
- * Pointer to Rx Hash queue.
- * @param tunnel
+ * @param[in] rss_key
+ * RSS key for the Rx hash queue.
+ * @param[in] hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param[in] ind_tbl
+ * Indirection table for TIR.
+ * @param[in] tunnel
* Tunnel type.
+ * @param[out] tir_attr
+ * Parameters structure for TIR creation/modification.
- *
- * @return
- * 0 on success, a negative errno value otherwise and rte_errno is set.
*/
-static int
-mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
- int tunnel __rte_unused)
+static void
+mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
+ uint64_t hash_fields,
+ const struct mlx5_ind_table_obj *ind_tbl,
+ int tunnel, enum mlx5_rxq_type rxq_obj_type,
+ struct mlx5_devx_tir_attr *tir_attr)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl = hrxq->ind_table;
- struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]];
- struct mlx5_rxq_ctrl *rxq_ctrl =
- container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- struct mlx5_devx_tir_attr tir_attr;
- const uint8_t *rss_key = hrxq->rss_key;
- uint64_t hash_fields = hrxq->hash_fields;
bool lro = true;
uint32_t i;
- int err;
/* Enable TIR LRO only if all the queues were configured for. */
for (i = 0; i < ind_tbl->queues_n; ++i) {
@@ -766,26 +772,24 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
break;
}
}
- memset(&tir_attr, 0, sizeof(tir_attr));
- tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
- tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
- tir_attr.tunneled_offload_en = !!tunnel;
+ memset(tir_attr, 0, sizeof(*tir_attr));
+ tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
+ tir_attr->rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
+ tir_attr->tunneled_offload_en = !!tunnel;
/* If needed, translate hash_fields bitmap to PRM format. */
if (hash_fields) {
- struct mlx5_rx_hash_field_select *rx_hash_field_select = NULL;
+ struct mlx5_rx_hash_field_select *rx_hash_field_select =
#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
- rx_hash_field_select = hash_fields & IBV_RX_HASH_INNER ?
- &tir_attr.rx_hash_field_selector_inner :
- &tir_attr.rx_hash_field_selector_outer;
-#else
- rx_hash_field_select = &tir_attr.rx_hash_field_selector_outer;
+ hash_fields & IBV_RX_HASH_INNER ?
+ &tir_attr->rx_hash_field_selector_inner :
#endif
+ &tir_attr->rx_hash_field_selector_outer;
/* 1 bit: 0: IPv4, 1: IPv6. */
rx_hash_field_select->l3_prot_type =
!!(hash_fields & MLX5_IPV6_IBV_RX_HASH);
/* 1 bit: 0: TCP, 1: UDP. */
rx_hash_field_select->l4_prot_type =
- !!(hash_fields & MLX5_UDP_IBV_RX_HASH);
+ !!(hash_fields & MLX5_UDP_IBV_RX_HASH);
/* Bitmask which sets which fields to use in RX Hash. */
rx_hash_field_select->selected_fields =
((!!(hash_fields & MLX5_L3_SRC_IBV_RX_HASH)) <<
@@ -797,20 +801,53 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
}
- if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
- tir_attr.transport_domain = priv->sh->td->id;
+ if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN)
+ tir_attr->transport_domain = priv->sh->td->id;
else
- tir_attr.transport_domain = priv->sh->tdn;
- memcpy(tir_attr.rx_hash_toeplitz_key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- tir_attr.indirect_table = ind_tbl->rqt->id;
+ tir_attr->transport_domain = priv->sh->tdn;
+ memcpy(tir_attr->rx_hash_toeplitz_key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ tir_attr->indirect_table = ind_tbl->rqt->id;
if (dev->data->dev_conf.lpbk_mode)
- tir_attr.self_lb_block = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+ tir_attr->self_lb_block =
+ MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
if (lro) {
- tir_attr.lro_timeout_period_usecs = priv->config.lro.timeout;
- tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
- tir_attr.lro_enable_mask = MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
- MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
+ tir_attr->lro_timeout_period_usecs = priv->config.lro.timeout;
+ tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
+ tir_attr->lro_enable_mask =
+ MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
+ MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
}
+}
+
+/**
+ * Create an Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param hrxq
+ * Pointer to Rx Hash queue.
+ * @param tunnel
+ * Tunnel type.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
+ int tunnel __rte_unused)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_ind_table_obj *ind_tbl = hrxq->ind_table;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ struct mlx5_devx_tir_attr tir_attr = {0};
+ const uint8_t *rss_key = hrxq->rss_key;
+ uint64_t hash_fields = hrxq->hash_fields;
+ int err;
+
+ mlx5_devx_tir_attr_set(dev, rss_key, hash_fields, ind_tbl, tunnel,
+ rxq_ctrl->type, &tir_attr);
hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
if (!hrxq->tir) {
DRV_LOG(ERR, "Port %u cannot create DevX TIR.",
@@ -847,6 +884,65 @@ mlx5_devx_tir_destroy(struct mlx5_hrxq *hrxq)
claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
}
+/**
+ * Modify an Rx Hash queue configuration.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param hrxq
+ * Hash Rx queue to modify.
+ * @param rss_key
+ * RSS key for the Rx hash queue.
+ * @param hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param ind_tbl
+ * Indirection table for TIR.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
+ const uint8_t *rss_key,
+ uint64_t hash_fields,
+ const struct mlx5_ind_table_obj *ind_tbl)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]];
+ struct mlx5_rxq_ctrl *rxq_ctrl =
+ container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
+ enum mlx5_rxq_type rxq_obj_type = rxq_ctrl->type;
+ struct mlx5_devx_modify_tir_attr modify_tir = {0};
+
+ /*
+ * Untested modification fields:
+ * - rx_hash_symmetric is not set in hrxq_new(),
+ * - rx_hash_fn is hard-coded in hrxq_new(),
+ * - lro_xxx is not set after Rx queue setup.
+ */
+ if (ind_tbl != hrxq->ind_table)
+ modify_tir.modify_bitmask |=
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE;
+ if (hash_fields != hrxq->hash_fields ||
+ memcmp(hrxq->rss_key, rss_key, MLX5_RSS_HASH_KEY_LEN))
+ modify_tir.modify_bitmask |=
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH;
+ mlx5_devx_tir_attr_set(dev, rss_key, hash_fields, ind_tbl,
+ 0, /* N/A - tunnel modification unsupported */
+ rxq_obj_type, &modify_tir.tir);
+ modify_tir.tirn = hrxq->tir->id;
+ if (mlx5_devx_cmd_modify_tir(hrxq->tir, &modify_tir)) {
+ DRV_LOG(ERR, "port %u cannot modify DevX TIR",
+ dev->data->port_id);
+ rte_errno = errno;
+ return -rte_errno;
+ }
+ return 0;
+}
+
/**
* Create a DevX drop action for Rx Hash queue.
*
@@ -1357,6 +1453,7 @@ struct mlx5_obj_ops devx_obj_ops = {
.ind_table_destroy = mlx5_devx_ind_table_destroy,
.hrxq_new = mlx5_devx_hrxq_new,
.hrxq_destroy = mlx5_devx_tir_destroy,
+ .hrxq_modify = mlx5_devx_hrxq_modify,
.drop_action_create = mlx5_devx_drop_action_create,
.drop_action_destroy = mlx5_devx_drop_action_destroy,
.txq_obj_new = mlx5_txq_devx_obj_new,
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index f1d8373079..deb07428df 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1706,6 +1706,29 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
return MLX5_RXQ_TYPE_UNDEFINED;
}
+/**
+ * Match queues listed in arguments to queues contained in indirection table
+ * object.
+ *
+ * @param ind_tbl
+ * Pointer to indirection table to match.
+ * @param queues
+ * Queues to match to the queues in the indirection table.
+ * @param queues_n
+ * Number of queues in the array.
+ *
+ * @return
+ * 1 if all queues in the indirection table match, 0 otherwise.
+ */
+static int
+mlx5_ind_table_obj_match_queues(const struct mlx5_ind_table_obj *ind_tbl,
+ const uint16_t *queues, uint32_t queues_n)
+{
+ return (ind_tbl->queues_n == queues_n) &&
+ (!memcmp(ind_tbl->queues, queues,
+ ind_tbl->queues_n * sizeof(ind_tbl->queues[0])));
+}
+
/**
* Get an indirection table.
*
@@ -1902,6 +1925,86 @@ mlx5_hrxq_get(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Modify an Rx Hash queue configuration.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param hrxq
+ * Index to Hash Rx queue to modify.
+ * @param rss_key
+ * RSS key for the Rx hash queue.
+ * @param rss_key_len
+ * RSS key length.
+ * @param hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param queues
+ * Queues entering in hash queue. In case of empty hash_fields only the
+ * first queue index will be taken for the indirection table.
+ * @param queues_n
+ * Number of queues.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n)
+{
+ int err;
+ struct mlx5_ind_table_obj *ind_tbl = NULL;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq =
+ mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
+ int ret;
+
+ if (!hrxq) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+ /* validations */
+ if (hrxq->rss_key_len != rss_key_len) {
+ /* rss_key_len is fixed at 40 bytes and not supposed to change. */
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+
+ queues_n = hash_fields ? queues_n : 1;
+ if (mlx5_ind_table_obj_match_queues(hrxq->ind_table,
+ queues, queues_n)) {
+ ind_tbl = hrxq->ind_table;
+ } else {
+ ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
+ if (!ind_tbl)
+ ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
+ }
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+ ret = priv->obj_ops.hrxq_modify(dev, hrxq, rss_key, hash_fields,
+ ind_tbl);
+ if (ret) {
+ rte_errno = errno;
+ goto error;
+ }
+ if (ind_tbl != hrxq->ind_table) {
+ mlx5_ind_table_obj_release(dev, hrxq->ind_table);
+ hrxq->ind_table = ind_tbl;
+ }
+ hrxq->hash_fields = hash_fields;
+ memcpy(hrxq->rss_key, rss_key, rss_key_len);
+ return 0;
+error:
+ err = rte_errno;
+ if (ind_tbl != hrxq->ind_table)
+ mlx5_ind_table_obj_release(dev, ind_tbl);
+ rte_errno = err;
+ return -rte_errno;
+}
+
/**
* Release the hash Rx queue.
*
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 674296ee98..09499cc730 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -347,7 +347,10 @@ void mlx5_drop_action_destroy(struct rte_eth_dev *dev);
uint64_t mlx5_get_rx_port_offloads(void);
uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
void mlx5_rxq_timestamp_set(struct rte_eth_dev *dev);
-
+int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hxrq_idx,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n);
/* mlx5_txq.c */
--
2.26.2
* [dpdk-dev] [PATCH 3/4] net/mlx5: shared action PMD
2020-10-08 12:18 [dpdk-dev] [PATCH 0/4] Shared action RSS PMD impl Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 2/4] net/mlx5: modify hash Rx queue objects Andrey Vesnovaty
@ 2020-10-08 12:18 ` Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
4 siblings, 0 replies; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-08 12:18 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Andrey Vesnovaty,
Matan Azrad, Shahaf Shuler
From: Andrey Vesnovaty <andreyv@mellanox.com>
Implement the rte_flow shared action API for the mlx5 PMD.
Handle shared actions on flow create/destroy.
Signed-off-by: Andrey Vesnovaty <andreyv@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 1 +
drivers/net/mlx5/mlx5.h | 2 +
drivers/net/mlx5/mlx5_defs.h | 3 +
drivers/net/mlx5/mlx5_flow.c | 497 ++++++++++++++++++++++++++++++++---
drivers/net/mlx5/mlx5_flow.h | 86 ++++++
5 files changed, 557 insertions(+), 32 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e5ca392fed..562c4a3e33 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1384,6 +1384,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
* then this will return directly without any action.
*/
mlx5_flow_list_flush(dev, &priv->flows, true);
+ mlx5_shared_action_flush(dev);
mlx5_flow_meter_flush(dev, NULL);
/* Free the intermediate buffers for flow creation. */
mlx5_flow_free_intermediate(dev);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 7b85f64167..879cc9a51e 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -877,6 +877,8 @@ struct mlx5_priv {
uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */
struct mlx5_mp_id mp_id; /* ID of a multi-process process */
LIST_HEAD(fdir, mlx5_fdir_flow) fdir_flows; /* fdir flows. */
+ LIST_HEAD(shared_action, rte_flow_shared_action) shared_actions;
+ /* shared actions */
};
#define PORT_ID(priv) ((priv)->dev_data->port_id)
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 0df47391ee..22e41df1eb 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -197,6 +197,9 @@
#define MLX5_HAIRPIN_QUEUE_STRIDE 6
#define MLX5_HAIRPIN_JUMBO_LOG_SIZE (14 + 2)
+/* Maximum number of shared actions supported by rte_flow */
+#define MLX5_MAX_SHARED_ACTIONS 1
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index a94f63005c..91e9e546bc 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -220,6 +220,26 @@ static const struct rte_flow_expand_node mlx5_support_expansion[] = {
},
};
+static struct rte_flow_shared_action *
+mlx5_shared_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+static int mlx5_shared_action_destroy
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *shared_action,
+ struct rte_flow_error *error);
+static int mlx5_shared_action_update
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *shared_action,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+static int mlx5_shared_action_query
+ (struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action *action,
+ void *data,
+ struct rte_flow_error *error);
+
static const struct rte_flow_ops mlx5_flow_ops = {
.validate = mlx5_flow_validate,
.create = mlx5_flow_create,
@@ -229,6 +249,10 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.query = mlx5_flow_query,
.dev_dump = mlx5_flow_dev_dump,
.get_aged_flows = mlx5_flow_get_aged_flows,
+ .shared_action_create = mlx5_shared_action_create,
+ .shared_action_destroy = mlx5_shared_action_destroy,
+ .shared_action_update = mlx5_shared_action_update,
+ .shared_action_query = mlx5_shared_action_query,
};
/* Convert FDIR request to Generic flow. */
@@ -995,16 +1019,10 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
/*
* Validate the rss action.
*
- * @param[in] action
- * Pointer to the queue action.
- * @param[in] action_flags
- * Bit-fields that holds the actions detected until now.
* @param[in] dev
* Pointer to the Ethernet device structure.
- * @param[in] attr
- * Attributes of flow that includes this action.
- * @param[in] item_flags
- * Items that were detected.
+ * @param[in] action
+ * Pointer to the queue action.
* @param[out] error
* Pointer to error structure.
*
@@ -1012,23 +1030,14 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
int
-mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
- uint64_t action_flags,
- struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- uint64_t item_flags,
- struct rte_flow_error *error)
+mlx5_validate_action_rss(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
const struct rte_flow_action_rss *rss = action->conf;
- int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
unsigned int i;
- if (action_flags & MLX5_FLOW_FATE_ACTIONS)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, NULL,
- "can't have 2 fate actions"
- " in same flow");
if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ)
return rte_flow_error_set(error, ENOTSUP,
@@ -1074,15 +1083,17 @@ mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
if ((rss->types & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) &&
!(rss->types & ETH_RSS_IP))
return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
- "L3 partial RSS requested but L3 RSS"
- " type not specified");
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL,
+ "L3 partial RSS requested but L3 "
+ "RSS type not specified");
if ((rss->types & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) &&
!(rss->types & (ETH_RSS_UDP | ETH_RSS_TCP)))
return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
- "L4 partial RSS requested but L4 RSS"
- " type not specified");
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL,
+ "L4 partial RSS requested but L4 "
+ "RSS type not specified");
if (!priv->rxqs_n)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF,
@@ -1099,17 +1110,62 @@ mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
&rss->queue[i], "queue index out of range");
if (!(*priv->rxqs)[rss->queue[i]])
return rte_flow_error_set
- (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ (error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
&rss->queue[i], "queue is not configured");
}
+ return 0;
+}
+
+/*
+ * Validate the rss action.
+ *
+ * @param[in] action
+ * Pointer to the queue action.
+ * @param[in] action_flags
+ * Bit-fields that holds the actions detected until now.
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] attr
+ * Attributes of flow that includes this action.
+ * @param[in] item_flags
+ * Items that were detected.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
+ uint64_t action_flags,
+ struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint64_t item_flags,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_rss *rss = action->conf;
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ int ret;
+
+ if (action_flags & MLX5_FLOW_FATE_ACTIONS)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 fate actions"
+ " in same flow");
+ ret = mlx5_validate_action_rss(dev, action, error);
+ if (ret)
+ return ret;
if (attr->egress)
return rte_flow_error_set(error, ENOTSUP,
- RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
+ RTE_FLOW_ERROR_TYPE_ATTR_EGRESS,
+ NULL,
"rss action not supported for "
"egress");
if (rss->level > 1 && !tunnel)
return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF,
+ NULL,
"inner RSS is not supported for "
"non-tunnel flows");
if ((item_flags & MLX5_FLOW_LAYER_ECPRI) &&
@@ -2726,6 +2782,131 @@ flow_get_rss_action(const struct rte_flow_action actions[])
return NULL;
}
+/* Maps a shared action to its translated non-shared counterpart in an actions array. */
+struct mlx5_translated_shared_action {
+ struct rte_flow_shared_action *action; /**< Shared action */
+ int index; /**< Index in related array of rte_flow_action */
+};
+
+/**
+ * Translates actions of type RTE_FLOW_ACTION_TYPE_SHARED to the related
+ * non-shared action if translation is possible.
+ * This functionality is used to run the same execution path for both shared
+ * and non-shared actions on flow create. All necessary preparations for
+ * shared action handling should be performed on the *shared* actions list
+ * returned by this call.
+ *
+ * @param[in] actions
+ * List of actions to translate.
+ * @param[out] shared
+ * List to store translated shared actions.
+ * @param[in, out] shared_n
+ * Size of the *shared* array. On return, updated with the number of shared
+ * actions retrieved from the *actions* list.
+ * @param[out] translated_actions
+ * List of actions where all shared actions were translated to non-shared
+ * ones if possible. NULL if no translation took place.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_shared_actions_translate(const struct rte_flow_action actions[],
+ struct mlx5_translated_shared_action *shared,
+ int *shared_n,
+ struct rte_flow_action **translated_actions,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_action *translated = NULL;
+ int n;
+ int copied_n = 0;
+ struct mlx5_translated_shared_action *shared_end = NULL;
+
+ for (n = 0; actions[n].type != RTE_FLOW_ACTION_TYPE_END; n++) {
+ if (actions[n].type != RTE_FLOW_ACTION_TYPE_SHARED)
+ continue;
+ if (copied_n == *shared_n) {
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "too many shared actions");
+ }
+ rte_memcpy(&shared[copied_n].action, &actions[n].conf,
+ sizeof(actions[n].conf));
+ shared[copied_n].index = n;
+ copied_n++;
+ }
+ n++;
+ *shared_n = copied_n;
+ if (!copied_n)
+ return 0;
+ translated = rte_calloc(__func__, n, sizeof(struct rte_flow_action), 0);
+ if (!translated)
+ return rte_flow_error_set
+ (error, ENOMEM, RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL, "no memory to translate actions");
+ rte_memcpy(translated, actions, n * sizeof(struct rte_flow_action));
+ for (shared_end = shared + copied_n; shared < shared_end; shared++) {
+ const struct rte_flow_shared_action *shared_action;
+
+ shared_action = shared->action;
+ switch (shared_action->type) {
+ case MLX5_FLOW_ACTION_SHARED_RSS:
+ translated[shared->index].type =
+ RTE_FLOW_ACTION_TYPE_RSS;
+ translated[shared->index].conf =
+ &shared_action->rss.origin;
+ break;
+ default:
+ rte_free(translated);
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "invalid shared action type");
+ }
+ }
+ *translated_actions = translated;
+ return 0;
+}
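The translation step above can be modeled outside the driver with stand-in types. Everything in this sketch (the `fake_action_type` enum, `fake_action` struct, and `translate` helper) is hypothetical and only mirrors the shape of `flow_shared_actions_translate`: copy the actions array including the END terminator, then rewrite each SHARED entry to its concrete type and configuration:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Stand-in action types (illustrative, not the real rte_flow enums). */
enum fake_action_type { FAKE_END, FAKE_QUEUE, FAKE_SHARED, FAKE_RSS };

struct fake_action {
	enum fake_action_type type;
	const void *conf;
};

/* Copy the actions array (including the END terminator) and rewrite every
 * SHARED entry to a concrete RSS action, as the driver does for shared RSS. */
static struct fake_action *
translate(const struct fake_action *actions, const void *rss_conf)
{
	int n, i;
	struct fake_action *out;

	for (n = 0; actions[n].type != FAKE_END; n++)
		;
	n++; /* include the END terminator in the copy */
	out = calloc(n, sizeof(*out));
	if (!out)
		return NULL;
	memcpy(out, actions, n * sizeof(*out));
	for (i = 0; i < n; i++) {
		if (out[i].type == FAKE_SHARED) {
			out[i].type = FAKE_RSS;
			out[i].conf = rss_conf;
		}
	}
	return out;
}
```

As in the driver, the caller keeps using the original array when no SHARED entry was found, and frees the translated copy after flow creation otherwise.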
+
+/**
+ * Get Shared RSS action from the action list.
+ *
+ * @param[in] shared
+ * Pointer to the list of actions.
+ * @param[in] shared_n
+ * Actions list length.
+ *
+ * @return
+ * Pointer to the MLX5 RSS action if it exists, otherwise NULL.
+ */
+static struct mlx5_shared_action_rss *
+flow_get_shared_rss_action(struct mlx5_translated_shared_action *shared,
+ int shared_n)
+{
+ struct mlx5_translated_shared_action *shared_end;
+
+ for (shared_end = shared + shared_n; shared < shared_end; shared++) {
+ struct rte_flow_shared_action *shared_action;
+
+ shared_action = shared->action;
+ switch (shared_action->type) {
+ case MLX5_FLOW_ACTION_SHARED_RSS:
+ rte_atomic32_inc(&shared_action->refcnt);
+ return &shared_action->rss;
+ default:
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct rte_flow_shared_action *
+mlx5_flow_get_shared_rss(struct rte_flow *flow)
+{
+ if (flow->shared_rss)
+ return container_of(flow->shared_rss,
+ struct rte_flow_shared_action, rss);
+ else
+ return NULL;
+}
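The `container_of` call above recovers the enclosing `rte_flow_shared_action` handle from the embedded `rss` member. The idiom can be sketched standalone; the `offsetof`-based `CONTAINER_OF` macro and the `shared_handle`/`rss_part` types below are illustrative stand-ins, not the actual mlx5 definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Recover a pointer to the enclosing structure from a pointer to one of
 * its members: subtract the member's offset within the outer type. */
#define CONTAINER_OF(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

struct rss_part {
	int dummy;
};

/* Stand-in for a handle embedding an action-specific part. */
struct shared_handle {
	int refcnt;
	struct rss_part rss;
};
```

Given only a `struct rss_part *`, `CONTAINER_OF(p, struct shared_handle, rss)` yields the handle that contains it, which is exactly how the flow's stored `shared_rss` pointer is mapped back to its shared-action handle.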
+
static unsigned int
find_graph_root(const struct rte_flow_item pattern[], uint32_t rss_level)
{
@@ -4324,13 +4505,16 @@ static uint32_t
flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
+ const struct rte_flow_action original_actions[],
bool external, struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct rte_flow *flow = NULL;
struct mlx5_flow *dev_flow;
const struct rte_flow_action_rss *rss;
+ struct mlx5_translated_shared_action
+ shared_actions[MLX5_MAX_SHARED_ACTIONS];
+ int shared_actions_n = MLX5_MAX_SHARED_ACTIONS;
union {
struct rte_flow_expand_rss buf;
uint8_t buffer[2048];
@@ -4350,14 +4534,23 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
struct rte_flow_expand_rss *buf = &expand_buffer.buf;
struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *)
priv->rss_desc)[!!priv->flow_idx];
- const struct rte_flow_action *p_actions_rx = actions;
+ const struct rte_flow_action *p_actions_rx;
uint32_t i;
uint32_t idx = 0;
int hairpin_flow;
uint32_t hairpin_id = 0;
struct rte_flow_attr attr_tx = { .priority = 0 };
- int ret;
+ const struct rte_flow_action *actions;
+ struct rte_flow_action *translated_actions = NULL;
+ int ret = flow_shared_actions_translate(original_actions,
+ shared_actions,
+ &shared_actions_n,
+ &translated_actions, error);
+ if (ret < 0)
+ return 0;
+ actions = (translated_actions) ? translated_actions : original_actions;
+ p_actions_rx = actions;
hairpin_flow = flow_check_hairpin_split(dev, attr, actions);
ret = flow_drv_validate(dev, attr, items, p_actions_rx,
external, hairpin_flow, error);
@@ -4409,6 +4602,8 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
buf->entries = 1;
buf->entry[0].pattern = (void *)(uintptr_t)items;
}
+ flow->shared_rss = flow_get_shared_rss_action(shared_actions,
+ shared_actions_n);
/*
* Record the start index when there is a nested call. All sub-flows
* need to be translated before another calling.
@@ -4480,6 +4675,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx,
flow, next);
flow_rxq_flags_set(dev, flow);
+ rte_free(translated_actions);
/* Nested flow creation index recovery. */
priv->flow_idx = priv->flow_nested_idx;
if (priv->flow_nested_idx)
@@ -4494,6 +4690,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
rte_errno = ret; /* Restore rte_errno. */
error_before_flow:
ret = rte_errno;
+ rte_free(translated_actions);
if (hairpin_id)
mlx5_flow_id_release(priv->sh->flow_id_pool,
hairpin_id);
@@ -6310,3 +6507,239 @@ mlx5_flow_get_aged_flows(struct rte_eth_dev *dev, void **contexts,
dev->data->port_id);
return -ENOTSUP;
}
+
+/**
+ * Retrieve driver ops struct.
+ *
+ * @param[in] dev
+ * Pointer to the dev structure.
+ * @param[in] error_message
+ * Error message to set if driver ops struct not found.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * Pointer to driver ops on success, otherwise NULL and rte_errno is set.
+ */
+static const struct mlx5_flow_driver_ops *
+flow_drv_dv_ops_get(struct rte_eth_dev *dev,
+ const char *error_message,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_attr attr = { .transfer = 0 };
+
+ if (flow_get_drv_type(dev, &attr) != MLX5_FLOW_TYPE_DV) {
+ rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, error_message);
+ DRV_LOG(ERR, "port %u %s.", dev->data->port_id, error_message);
+ return NULL;
+ }
+
+ return flow_get_drv_ops(MLX5_FLOW_TYPE_DV);
+}
+
+/* Wrapper for driver action_validate op callback */
+static int
+flow_drv_action_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev,
+ "action registration unsupported", error);
+ return (fops) ? fops->action_validate(dev, conf, action, error)
+ : -rte_errno;
+}
+
+/* Wrapper for driver action_create op callback */
+static struct rte_flow_shared_action *
+flow_drv_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev,
+ "action registration unsupported", error);
+ return (fops) ? fops->action_create(dev, conf, action, error) : NULL;
+}
+
+/**
+ * Destroys the shared action by handle.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] action
+ * Handle for the shared action to be destroyed.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ *
+ * @note: wrapper for driver action_destroy op callback.
+ */
+static int
+mlx5_shared_action_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error)
+{
+ const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev,
+ "action registration unsupported", error);
+ return (fops) ? fops->action_destroy(dev, action, error) : -rte_errno;
+}
+
+/* Wrapper for driver action_update op callback */
+static int
+flow_drv_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ struct rte_flow_error *error)
+{
+ const struct mlx5_flow_driver_ops *fops = flow_drv_dv_ops_get(dev,
+ "action registration unsupported", error);
+ return (fops) ? fops->action_update(dev, action,
+ action_conf, error)
+ : -rte_errno;
+}
+
+/**
+ * Create shared action for reuse in multiple flow rules.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] action
+ * Action configuration for shared action creation.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ * @return
+ * A valid handle in case of success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_shared_action *
+mlx5_shared_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ if (flow_drv_action_validate(dev, conf, action, error))
+ return NULL;
+ return flow_drv_action_create(dev, conf, action, error);
+}
+
+/**
+ * Updates in place the shared action configuration pointed to by the
+ * *action* handle with the configuration provided as the *update* argument.
+ * The update of the shared action configuration affects all flow rules
+ * reusing the action via its handle.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] action
+ * Handle for the shared action to be updated.
+ * @param[in] update
+ * Action specification used to modify the action pointed by handle.
+ * *update* should be of the same type as the action pointed to by the
+ * *action* handle argument, otherwise it is considered invalid.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_shared_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *shared_action,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ int ret;
+
+ switch (shared_action->type) {
+ case MLX5_FLOW_ACTION_SHARED_RSS:
+ if (action->type != RTE_FLOW_ACTION_TYPE_RSS) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "update action type invalid");
+ }
+ ret = flow_drv_action_validate(dev, NULL, action, error);
+ if (ret)
+ return ret;
+ return flow_drv_action_update(dev, shared_action, action->conf,
+ error);
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
+
+/**
+ * Query the shared action by handle.
+ *
+ * This function allows retrieving action-specific data such as counters.
+ * Data is gathered by a special action which may be present/referenced in
+ * more than one flow rule definition.
+ *
+ * \see RTE_FLOW_ACTION_TYPE_COUNT
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] action
+ * Handle for the shared action to query.
+ * @param[in, out] data
+ * Pointer to storage for the associated query data type.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_shared_action_query(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action *action,
+ void *data,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ switch (action->type) {
+ case MLX5_FLOW_ACTION_SHARED_RSS:
+ *((int32_t *)data) = rte_atomic32_read(&action->refcnt);
+ return 0;
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
+
+/**
+ * Destroy all shared actions.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_shared_action_flush(struct rte_eth_dev *dev)
+{
+ struct rte_flow_error error;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct rte_flow_shared_action *action;
+ int ret = 0;
+
+ while (!LIST_EMPTY(&priv->shared_actions)) {
+ action = LIST_FIRST(&priv->shared_actions);
+ ret = mlx5_shared_action_destroy(dev, action, &error);
+ if (ret)
+ break; /* Stop on the first failure to avoid looping forever. */
+ }
+ return ret;
+}
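The reference counting used by the create, query, and destroy paths above can be sketched as a minimal standalone model. This is illustrative only, not driver code: creation takes one reference, each flow using the action takes another, and destruction is refused while flow references remain (the driver reports this as ETOOMANYREFS):

```c
#include <assert.h>

/* Minimal model of the shared-action refcount lifecycle (illustrative). */
struct shared_action_model {
	int refcnt;
};

static void action_create(struct shared_action_model *a) { a->refcnt = 1; }
static void flow_attach(struct shared_action_model *a)   { a->refcnt++; }
static void flow_release(struct shared_action_model *a)  { a->refcnt--; }

/* Returns 0 on success, -1 (ETOOMANYREFS in the driver) if still in use.
 * Simplified: the check-then-clear here stands in for the driver's atomic
 * dec-and-test. */
static int action_destroy(struct shared_action_model *a)
{
	if (a->refcnt > 1)
		return -1;
	a->refcnt = 0;
	return 0;
}
```

In this model, `mlx5_shared_action_query` for a shared RSS action corresponds to reading `refcnt`, which is why the query path above simply returns the atomic counter value.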
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 279daf21f5..c2d715a60b 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -196,6 +196,7 @@ enum mlx5_feature_name {
#define MLX5_FLOW_ACTION_SET_IPV6_DSCP (1ull << 33)
#define MLX5_FLOW_ACTION_AGE (1ull << 34)
#define MLX5_FLOW_ACTION_DEFAULT_MISS (1ull << 35)
+#define MLX5_FLOW_ACTION_SHARED_RSS (1ull << 36)
#define MLX5_FLOW_FATE_ACTIONS \
(MLX5_FLOW_ACTION_DROP | MLX5_FLOW_ACTION_QUEUE | \
@@ -843,6 +844,7 @@ struct mlx5_fdir_flow {
/* Flow structure. */
struct rte_flow {
ILIST_ENTRY(uint32_t)next; /**< Index to the next flow structure. */
+ struct mlx5_shared_action_rss *shared_rss; /**< Shared RSS action. */
uint32_t dev_handles;
/**< Device flow handles that are part of the flow. */
uint32_t drv_type:2; /**< Driver type. */
@@ -856,6 +858,62 @@ struct rte_flow {
uint16_t meter; /**< Holds flow meter id. */
} __rte_packed;
+/*
+ * Define list of valid combinations of RX Hash fields
+ * (see enum ibv_rx_hash_fields).
+ */
+#define MLX5_RSS_HASH_IPV4 (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
+#define MLX5_RSS_HASH_IPV4_TCP \
+ (MLX5_RSS_HASH_IPV4 | \
+ IBV_RX_HASH_SRC_PORT_TCP | IBV_RX_HASH_DST_PORT_TCP)
+#define MLX5_RSS_HASH_IPV4_UDP \
+ (MLX5_RSS_HASH_IPV4 | \
+ IBV_RX_HASH_SRC_PORT_UDP | IBV_RX_HASH_DST_PORT_UDP)
+#define MLX5_RSS_HASH_IPV6 (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
+#define MLX5_RSS_HASH_IPV6_TCP \
+ (MLX5_RSS_HASH_IPV6 | \
+ IBV_RX_HASH_SRC_PORT_TCP | IBV_RX_HASH_DST_PORT_TCP)
+#define MLX5_RSS_HASH_IPV6_UDP \
+ (MLX5_RSS_HASH_IPV6 | \
+ IBV_RX_HASH_SRC_PORT_UDP | IBV_RX_HASH_DST_PORT_UDP)
+#define MLX5_RSS_HASH_NONE 0ULL
+
+/* array of valid combinations of RX Hash fields for RSS */
+static const uint64_t mlx5_rss_hash_fields[] = {
+ MLX5_RSS_HASH_IPV4,
+ MLX5_RSS_HASH_IPV4_TCP,
+ MLX5_RSS_HASH_IPV4_UDP,
+ MLX5_RSS_HASH_IPV6,
+ MLX5_RSS_HASH_IPV6_TCP,
+ MLX5_RSS_HASH_IPV6_UDP,
+ MLX5_RSS_HASH_NONE,
+};
+
+#define MLX5_RSS_HASH_FIELDS_LEN RTE_DIM(mlx5_rss_hash_fields)
+
+/* Shared RSS action structure */
+struct mlx5_shared_action_rss {
+ struct rte_flow_action_rss origin; /**< Original rte RSS action. */
+ uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
+ uint16_t *queue; /**< Queue indices to use. */
+ uint32_t hrxq[MLX5_RSS_HASH_FIELDS_LEN];
+ /**< Hash RX queue indexes mapped to mlx5_rss_hash_fields */
+ uint32_t hrxq_tunnel[MLX5_RSS_HASH_FIELDS_LEN];
+ /**< Hash RX queue indexes for tunneled RSS */
+};
+
+struct rte_flow_shared_action {
+ LIST_ENTRY(rte_flow_shared_action) next;
+ /**< Pointer to the next element. */
+ rte_atomic32_t refcnt;
+ uint64_t type;
+ /**< Shared action type (see MLX5_FLOW_ACTION_SHARED_*). */
+ union {
+ struct mlx5_shared_action_rss rss;
+ /**< Shared RSS action. */
+ };
+};
+
typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
@@ -910,6 +968,25 @@ typedef int (*mlx5_flow_get_aged_flows_t)
void **context,
uint32_t nb_contexts,
struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_validate_t)
+ (struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+typedef struct rte_flow_shared_action *(*mlx5_flow_action_create_t)
+ (struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_destroy_t)
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_update_t)
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ struct rte_flow_error *error);
struct mlx5_flow_driver_ops {
mlx5_flow_validate_t validate;
mlx5_flow_prepare_t prepare;
@@ -926,6 +1003,10 @@ struct mlx5_flow_driver_ops {
mlx5_flow_counter_free_t counter_free;
mlx5_flow_counter_query_t counter_query;
mlx5_flow_get_aged_flows_t get_aged_flows;
+ mlx5_flow_action_validate_t action_validate;
+ mlx5_flow_action_create_t action_create;
+ mlx5_flow_action_destroy_t action_destroy;
+ mlx5_flow_action_update_t action_update;
};
/* mlx5_flow.c */
@@ -951,6 +1032,9 @@ int mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
const struct rte_flow_action *mlx5_flow_find_action
(const struct rte_flow_action *actions,
enum rte_flow_action_type action);
+int mlx5_validate_action_rss(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
int mlx5_flow_validate_action_count(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
struct rte_flow_error *error);
@@ -1069,4 +1153,6 @@ int mlx5_flow_destroy_policer_rules(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr);
int mlx5_flow_meter_flush(struct rte_eth_dev *dev,
struct rte_mtr_error *error);
+struct rte_flow_shared_action *mlx5_flow_get_shared_rss(struct rte_flow *flow);
+int mlx5_shared_action_flush(struct rte_eth_dev *dev);
#endif /* RTE_PMD_MLX5_FLOW_H_ */
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [dpdk-dev] [PATCH 4/4] net/mlx5: driver support for shared action
2020-10-08 12:18 [dpdk-dev] [PATCH 0/4] Shared action RSS PMD impl Andrey Vesnovaty
` (2 preceding siblings ...)
2020-10-08 12:18 ` [dpdk-dev] [PATCH 3/4] net/mlx5: shared action PMD Andrey Vesnovaty
@ 2020-10-08 12:18 ` Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
4 siblings, 0 replies; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-08 12:18 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Andrey Vesnovaty,
Matan Azrad, Shahaf Shuler
From: Andrey Vesnovaty <andreyv@mellanox.com>
Implement shared action create/destroy/update/query.
Implement RSS shared action and handle shared RSS on
flow apply and release.
Note: currently implemented for the shared RSS action only.
Signed-off-by: Andrey Vesnovaty <andreyv@mellanox.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 684 ++++++++++++++++++++++++++++++--
1 file changed, 661 insertions(+), 23 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 79fdf34c0e..b99db65d2d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8928,6 +8928,157 @@ __flow_dv_translate(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Set hash RX queue by hash fields (see enum ibv_rx_hash_fields)
+ * and tunnel.
+ *
+ * @param[in, out] action
+ * Shared RSS action holding hash RX queue objects.
+ * @param[in] hash_fields
+ * Defines combination of packet fields to participate in RX hash.
+ * @param[in] tunnel
+ * Tunnel type
+ * @param[in] hrxq_idx
+ * Hash RX queue index to set.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_rss_hrxq_set(struct mlx5_shared_action_rss *action,
+ const uint64_t hash_fields,
+ const int tunnel,
+ uint32_t hrxq_idx)
+{
+ uint32_t *hrxqs = (tunnel) ? action->hrxq_tunnel : action->hrxq;
+
+ switch (hash_fields & ~IBV_RX_HASH_INNER) {
+ case MLX5_RSS_HASH_IPV4:
+ hrxqs[0] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV4_TCP:
+ hrxqs[1] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV4_UDP:
+ hrxqs[2] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV6:
+ hrxqs[3] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV6_TCP:
+ hrxqs[4] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV6_UDP:
+ hrxqs[5] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_NONE:
+ hrxqs[6] = hrxq_idx;
+ return 0;
+ default:
+ return -1;
+ }
+}
+
+/**
+ * Look up for hash RX queue by hash fields (see enum ibv_rx_hash_fields)
+ * and tunnel.
+ *
+ * @param[in] action
+ * Shared RSS action holding hash RX queue objects.
+ * @param[in] hash_fields
+ * Defines combination of packet fields to participate in RX hash.
+ * @param[in] tunnel
+ * Tunnel type
+ *
+ * @return
+ * Valid hash RX queue index, otherwise 0.
+ */
+static uint32_t
+__flow_dv_action_rss_hrxq_lookup(const struct mlx5_shared_action_rss *action,
+ const uint64_t hash_fields,
+ const int tunnel)
+{
+ const uint32_t *hrxqs = (tunnel) ? action->hrxq_tunnel : action->hrxq;
+
+ switch (hash_fields & ~IBV_RX_HASH_INNER) {
+ case MLX5_RSS_HASH_IPV4:
+ return hrxqs[0];
+ case MLX5_RSS_HASH_IPV4_TCP:
+ return hrxqs[1];
+ case MLX5_RSS_HASH_IPV4_UDP:
+ return hrxqs[2];
+ case MLX5_RSS_HASH_IPV6:
+ return hrxqs[3];
+ case MLX5_RSS_HASH_IPV6_TCP:
+ return hrxqs[4];
+ case MLX5_RSS_HASH_IPV6_UDP:
+ return hrxqs[5];
+ case MLX5_RSS_HASH_NONE:
+ return hrxqs[6];
+ default:
+ return 0;
+ }
+}
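The set and lookup switches above implement one fixed mapping from hash-field combination to array slot 0..6, ignoring the inner (tunnel) bit. With placeholder bit values (the `H_*` constants below are hypothetical stand-ins, not the real `ibv_rx_hash_fields` values), the same mapping can be expressed table-driven:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* Placeholder hash-field bits; the real values come from
 * enum ibv_rx_hash_fields and differ from these. */
#define H_IPV4     (1u << 0)
#define H_IPV4_TCP (H_IPV4 | (1u << 1))
#define H_IPV4_UDP (H_IPV4 | (1u << 2))
#define H_IPV6     (1u << 3)
#define H_IPV6_TCP (H_IPV6 | (1u << 4))
#define H_IPV6_UDP (H_IPV6 | (1u << 5))
#define H_NONE     0u
#define H_INNER    (1u << 31)

/* Slot order matches the hard-coded indices 0..6 in the switches above. */
static const uint32_t slot_map[] = {
	H_IPV4, H_IPV4_TCP, H_IPV4_UDP,
	H_IPV6, H_IPV6_TCP, H_IPV6_UDP,
	H_NONE,
};

/* Return the hrxq slot for a hash-field combination, masking out the
 * inner (tunnel) bit, or -1 for an unsupported combination. */
static int hash_fields_slot(uint32_t hash_fields)
{
	uint32_t key = hash_fields & ~H_INNER;
	size_t i;

	for (i = 0; i < sizeof(slot_map) / sizeof(slot_map[0]); i++)
		if (slot_map[i] == key)
			return (int)i;
	return -1;
}
```

A table-driven form keeps the set and lookup paths structurally identical, so the two cannot drift apart; the switch form in the patch trades that for avoiding a loop on the fast path.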
+
+/**
+ * Retrieves hash RX queue suitable for the *flow*.
+ * If a shared action is configured for *flow*, the suitable hash RX queue
+ * will be retrieved from the attached shared action.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] flow
+ * Pointer to the flow structure.
+ * @param[in] dev_flow
+ * Pointer to the sub flow.
+ * @param[out] hrxq
+ * Pointer to retrieved hash RX queue object.
+ *
+ * @return
+ * Valid hash RX queue index, otherwise 0 and rte_errno is set.
+ */
+static uint32_t
+__flow_dv_rss_get_hrxq(struct rte_eth_dev *dev, struct rte_flow *flow,
+ struct mlx5_flow *dev_flow,
+ struct mlx5_hrxq **hrxq)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t hrxq_idx;
+
+ if (flow->shared_rss) {
+ hrxq_idx = __flow_dv_action_rss_hrxq_lookup
+ (flow->shared_rss, dev_flow->hash_fields,
+ !!(dev_flow->handle->layers &
+ MLX5_FLOW_LAYER_TUNNEL));
+ if (hrxq_idx) {
+ *hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
+ hrxq_idx);
+ rte_atomic32_inc(&(*hrxq)->refcnt);
+ }
+ } else {
+ struct mlx5_flow_rss_desc *rss_desc =
+ &((struct mlx5_flow_rss_desc *)priv->rss_desc)
+ [!!priv->flow_nested_idx];
+
+ MLX5_ASSERT(rss_desc->queue_num);
+ hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ rss_desc->queue, rss_desc->queue_num);
+ if (!hrxq_idx) {
+ hrxq_idx = mlx5_hrxq_new(dev,
+ rss_desc->key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ rss_desc->queue,
+ rss_desc->queue_num,
+ !!(dev_flow->handle->layers &
+ MLX5_FLOW_LAYER_TUNNEL));
+ }
+ *hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
+ hrxq_idx);
+ }
+ return hrxq_idx;
+}
+
/**
* Apply the flow to the NIC, lock free,
* (mutex should be acquired by caller).
@@ -8986,30 +9137,10 @@ __flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
dv->actions[n++] = drop_hrxq->action;
}
} else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE) {
- struct mlx5_hrxq *hrxq;
- uint32_t hrxq_idx;
- struct mlx5_flow_rss_desc *rss_desc =
- &((struct mlx5_flow_rss_desc *)priv->rss_desc)
- [!!priv->flow_nested_idx];
+ struct mlx5_hrxq *hrxq = NULL;
+ uint32_t hrxq_idx = __flow_dv_rss_get_hrxq
+ (dev, flow, dev_flow, &hrxq);
- MLX5_ASSERT(rss_desc->queue_num);
- hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
- dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num);
- if (!hrxq_idx) {
- hrxq_idx = mlx5_hrxq_new
- (dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
- dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num,
- !!(dh->layers &
- MLX5_FLOW_LAYER_TUNNEL));
- }
- hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
- hrxq_idx);
if (!hrxq) {
rte_flow_error_set
(error, rte_errno,
@@ -9427,12 +9558,16 @@ __flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
static void
__flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
+ struct rte_flow_shared_action *shared;
struct mlx5_flow_handle *dev_handle;
struct mlx5_priv *priv = dev->data->dev_private;
if (!flow)
return;
__flow_dv_remove(dev, flow);
+ shared = mlx5_flow_get_shared_rss(flow);
+ if (shared)
+ rte_atomic32_dec(&shared->refcnt);
if (flow->counter) {
flow_dv_counter_release(dev, flow->counter);
flow->counter = 0;
@@ -9472,6 +9607,419 @@ __flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
}
}
+/**
+ * Release array of hash RX queue objects.
+ * Helper function.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in, out] hrxqs
+ * Array of hash RX queue objects.
+ *
+ * @return
+ * Total number of references to hash RX queue objects in *hrxqs* array
+ * after this operation.
+ */
+static int
+__flow_dv_hrxqs_release(struct rte_eth_dev *dev,
+ uint32_t (*hrxqs)[MLX5_RSS_HASH_FIELDS_LEN])
+{
+ size_t i;
+ int remaining = 0, ret = 0;
+
+ for (i = 0; i < RTE_DIM(*hrxqs); i++) {
+ ret = mlx5_hrxq_release(dev, (*hrxqs)[i]);
+ if (!ret)
+ (*hrxqs)[i] = 0;
+ remaining += ret;
+ }
+ return remaining;
+}
+
+/**
+ * Release all hash RX queue objects representing shared RSS action.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in, out] action
+ * Shared RSS action to remove hash RX queue objects from.
+ *
+ * @return
+ * Total number of references to hash RX queue objects stored in *action*
+ * after this operation.
+ * Expected to be 0 if no external references held.
+ */
+static int
+__flow_dv_action_rss_hrxqs_release(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *action)
+{
+ return __flow_dv_hrxqs_release(dev, &action->hrxq) +
+ __flow_dv_hrxqs_release(dev, &action->hrxq_tunnel);
+}
+
+/**
+ * Set up the shared RSS action.
+ * Prepare set of hash RX queue objects sufficient to handle all valid
+ * hash_fields combinations (see enum ibv_rx_hash_fields).
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in, out] action
+ * Partially initialized shared RSS action.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_rss_setup(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *action,
+ struct rte_flow_error *error)
+{
+ size_t i;
+ int err;
+
+ for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) {
+ uint32_t hrxq_idx;
+ uint64_t hash_fields = mlx5_rss_hash_fields[i];
+ int tunnel;
+
+ for (tunnel = 0; tunnel < 2; tunnel++) {
+ hrxq_idx = mlx5_hrxq_new(dev, action->origin.key,
+ MLX5_RSS_HASH_KEY_LEN,
+ hash_fields,
+ action->origin.queue,
+ action->origin.queue_num,
+ tunnel);
+ if (!hrxq_idx) {
+ rte_flow_error_set
+ (error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "cannot get hash queue");
+ goto error_hrxq_new;
+ }
+ err = __flow_dv_action_rss_hrxq_set
+ (action, hash_fields, tunnel, hrxq_idx);
+ MLX5_ASSERT(!err);
+ }
+ }
+ return 0;
+error_hrxq_new:
+ err = rte_errno;
+ __flow_dv_action_rss_hrxqs_release(dev, action);
+ rte_errno = err;
+ return -rte_errno;
+}
+
+/**
+ * Create shared RSS action.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] conf
+ * Shared action configuration.
+ * @param[in] rss
+ * RSS action specification used to create shared action.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * A valid shared action handle in case of success, NULL otherwise and
+ * rte_errno is set.
+ */
+static struct rte_flow_shared_action *
+__flow_dv_action_rss_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action_rss *rss,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+ void *queue = NULL;
+ uint32_t queue_size;
+ struct mlx5_shared_action_rss *shared_rss;
+ struct rte_flow_action_rss *origin;
+ const uint8_t *rss_key;
+
+ (void)conf;
+ queue_size = RTE_ALIGN_CEIL(rss->queue_num * sizeof(uint16_t),
+ sizeof(void *));
+ queue = rte_calloc(__func__, 1, queue_size, 0);
+ shared_action = rte_calloc(__func__, 1, sizeof(*shared_action), 0);
+ if (!shared_action || !queue) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "cannot allocate resource memory");
+ goto error_rss_init;
+ }
+ shared_rss = &shared_action->rss;
+ shared_rss->queue = queue;
+ origin = &shared_rss->origin;
+ origin->func = rss->func;
+ origin->level = rss->level;
+ /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
+ origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* NULL RSS key indicates default RSS key. */
+ rss_key = !rss->key ? rss_hash_default_key : rss->key;
+ rte_memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ origin->key = &shared_rss->key[0];
+ origin->key_len = MLX5_RSS_HASH_KEY_LEN;
+ rte_memcpy(shared_rss->queue, rss->queue, queue_size);
+ origin->queue = shared_rss->queue;
+ origin->queue_num = rss->queue_num;
+ if (__flow_dv_action_rss_setup(dev, shared_rss, error))
+ goto error_rss_init;
+ shared_action->type = MLX5_FLOW_ACTION_SHARED_RSS;
+ return shared_action;
+error_rss_init:
+ rte_free(shared_action);
+ rte_free(queue);
+ return NULL;
+}
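The `RTE_ALIGN_CEIL` call in the create path above rounds the queue array size up to pointer alignment before allocation. A minimal equivalent, assuming the usual round-up-to-a-power-of-two idiom that the DPDK macro also relies on, looks like:

```c
#include <assert.h>
#include <stddef.h>

/* Round v up to the next multiple of align (align must be a power of two);
 * a minimal equivalent of what RTE_ALIGN_CEIL computes for the queue
 * array size above. */
static inline size_t align_ceil(size_t v, size_t align)
{
	return (v + align - 1) & ~(align - 1);
}
```

For example, three 2-byte queue indices (6 bytes) round up to 8 bytes on a platform with 8-byte pointers, so the copied queue array always ends on a pointer-aligned boundary.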
+
+/**
+ * Destroy the shared RSS action.
+ * Release related hash RX queue objects.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] shared_rss
+ * The shared RSS action object to be removed.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_rss_release(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *shared_rss,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+ int remaining = __flow_dv_action_rss_hrxqs_release(dev, shared_rss);
+
+ if (remaining) {
+ return rte_flow_error_set(error, ETOOMANYREFS,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "shared rss hrxq has references");
+ }
+ shared_action = container_of(shared_rss,
+ struct rte_flow_shared_action, rss);
+ if (!rte_atomic32_dec_and_test(&shared_action->refcnt)) {
+ return rte_flow_error_set(error, ETOOMANYREFS,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "shared rss has references");
+ }
+ rte_free(shared_rss->queue);
+ return 0;
+}
+
+/**
+ * Create shared action, lock free,
+ * (mutex should be acquired by caller).
+ * Dispatcher for action type specific call.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] conf
+ * Shared action configuration.
+ * @param[in] action
+ * Action specification used to create shared action.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * A valid shared action handle in case of success, NULL otherwise and
+ * rte_errno is set.
+ */
+static struct rte_flow_shared_action *
+__flow_dv_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ switch (action->type) {
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ shared_action = __flow_dv_action_rss_create(dev, conf,
+ action->conf,
+ error);
+ break;
+ default:
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "action type not supported");
+ break;
+ }
+ if (shared_action) {
+ rte_atomic32_inc(&shared_action->refcnt);
+ LIST_INSERT_HEAD(&priv->shared_actions, shared_action, next);
+ }
+ return shared_action;
+}
+
+/**
+ * Destroy the shared action.
+ * Release action related resources on the NIC and the memory.
+ * Lock free, (mutex should be acquired by caller).
+ * Dispatcher for action type specific call.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] action
+ * The shared action object to be removed.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error)
+{
+ int ret;
+
+ switch (action->type) {
+ case MLX5_FLOW_ACTION_SHARED_RSS:
+ ret = __flow_dv_action_rss_release(dev, &action->rss, error);
+ break;
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+ if (ret)
+ return ret;
+ LIST_REMOVE(action, next);
+ rte_free(action);
+ return 0;
+}
+
+/**
+ * Update the shared RSS action configuration in place.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] shared_rss
+ * The shared RSS action object to be updated.
+ * @param[in] action_conf
+ * RSS action specification used to modify *shared_rss*.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ * @note: currently only updating the set of RSS queues is supported.
+ */
+static int
+__flow_dv_action_rss_update(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *shared_rss,
+ const struct rte_flow_action_rss *action_conf,
+ struct rte_flow_error *error)
+{
+ size_t i;
+ int ret;
+ void *queue = NULL;
+ uint32_t queue_size;
+ const uint8_t *rss_key;
+ uint32_t rss_key_len;
+
+ queue_size = RTE_ALIGN_CEIL(action_conf->queue_num * sizeof(uint16_t),
+ sizeof(void *));
+ queue = rte_calloc(__func__, 1, queue_size, 0);
+ if (!queue) {
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "cannot allocate resource memory");
+ }
+ if (action_conf->key) {
+ rss_key = action_conf->key;
+ rss_key_len = action_conf->key_len;
+ } else {
+ rss_key = rss_hash_default_key;
+ rss_key_len = MLX5_RSS_HASH_KEY_LEN;
+ }
+ for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) {
+ uint32_t hrxq_idx;
+ uint64_t hash_fields = mlx5_rss_hash_fields[i];
+ int tunnel;
+
+ for (tunnel = 0; tunnel < 2; tunnel++) {
+ hrxq_idx = __flow_dv_action_rss_hrxq_lookup
+ (shared_rss, hash_fields, tunnel);
+ MLX5_ASSERT(hrxq_idx);
+ ret = mlx5_hrxq_modify
+ (dev, hrxq_idx,
+ rss_key, rss_key_len,
+ hash_fields,
+ action_conf->queue, action_conf->queue_num);
+ if (ret) {
+ rte_free(queue);
+ return rte_flow_error_set
+ (error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "cannot update hash queue");
+ }
+ }
+ }
+ rte_free(shared_rss->queue);
+ shared_rss->queue = queue;
+ rte_memcpy(shared_rss->queue, action_conf->queue, queue_size);
+ shared_rss->origin.queue = shared_rss->queue;
+ shared_rss->origin.queue_num = action_conf->queue_num;
+ return 0;
+}
+
+/**
+ * Update the shared action configuration in place, lock free
+ * (the mutex should be acquired by the caller).
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] action
+ * The shared action object to be updated.
+ * @param[in] action_conf
+ * Action specification used to modify *action*.
+ * *action_conf* should be of type correlating with type of the *action*,
+ * otherwise considered as invalid.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ struct rte_flow_error *error)
+{
+ switch (action->type) {
+ case MLX5_FLOW_ACTION_SHARED_RSS:
+ return __flow_dv_action_rss_update(dev, &action->rss,
+ action_conf, error);
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
/**
* Query a dv flow rule for its statistics via devx.
*
@@ -10150,6 +10698,92 @@ flow_dv_counter_free(struct rte_eth_dev *dev, uint32_t cnt)
flow_dv_shared_unlock(dev);
}
+/**
+ * Validate shared action.
+ * Dispatcher for action type specific validation.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] conf
+ * Shared action configuration.
+ * @param[in] action
+ * The shared action object to validate.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+flow_dv_action_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ (void)conf;
+ switch (action->type) {
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ return mlx5_validate_action_rss(dev, action, error);
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
+
+/*
+ * Mutex-protected thunk to lock-free __flow_dv_action_create().
+ */
+static struct rte_flow_shared_action *
+flow_dv_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+
+ flow_dv_shared_lock(dev);
+ shared_action = __flow_dv_action_create(dev, conf, action, error);
+ flow_dv_shared_unlock(dev);
+ return shared_action;
+}
+
+/*
+ * Mutex-protected thunk to lock-free __flow_dv_action_destroy().
+ */
+static int
+flow_dv_action_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error)
+{
+ int ret;
+
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_action_destroy(dev, action, error);
+ flow_dv_shared_unlock(dev);
+ return ret;
+}
+
+/*
+ * Mutex-protected thunk to lock-free __flow_dv_action_update().
+ */
+static int
+flow_dv_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ struct rte_flow_error *error)
+{
+ int ret;
+
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_action_update(dev, action, action_conf,
+ error);
+ flow_dv_shared_unlock(dev);
+ return ret;
+}
+
const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
.validate = flow_dv_validate,
.prepare = flow_dv_prepare,
@@ -10166,6 +10800,10 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
.counter_free = flow_dv_counter_free,
.counter_query = flow_dv_counter_query,
.get_aged_flows = flow_get_aged_flows,
+ .action_validate = flow_dv_action_validate,
+ .action_create = flow_dv_action_create,
+ .action_destroy = flow_dv_action_destroy,
+ .action_update = flow_dv_action_update,
};
#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl
2020-10-08 12:18 [dpdk-dev] [PATCH 0/4] Shared action RSS PMD impl Andrey Vesnovaty
` (3 preceding siblings ...)
2020-10-08 12:18 ` [dpdk-dev] [PATCH 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
@ 2020-10-23 10:24 ` Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
` (4 more replies)
4 siblings, 5 replies; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-23 10:24 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta
This patchset introduces Mellanox PMD implementation for shared RSS
action. It was part of the 'RTE flow shared action API' patchset [1].
After v3 the patchset was split to RTE flow layer [2] and PMD
implementation (this patchset).
PMD implementation of this patchset is based on RTE flow API [3].
v2 changes (v1 was a draft):
* lots of cosmetic changes
* fix spelling/rephrases in comments and commit messages
* fix code styling issues
* code cleanups
* bugfix: prevent non shared action modification
[1] RTE flow shared action API v1
http://inbox.dpdk.org/dev/20200702120511.16315-1-andreyv@mellanox.com/
[2] RTE flow shared action API v4
http://inbox.dpdk.org/dev/20201006200835.30017-1-andreyv@nvidia.com/
[3] RTE flow shared action API v8
http://inbox.dpdk.org/dev/20201014114015.17197-1-andreyv@nvidia.com/
Andrey Vesnovaty (4):
common/mlx5: modify advanced Rx object via DevX
net/mlx5: modify hash Rx queue objects
net/mlx5: shared action PMD
net/mlx5: driver support for shared action
drivers/common/mlx5/mlx5_devx_cmds.c | 84 ++++
drivers/common/mlx5/mlx5_devx_cmds.h | 10 +
drivers/common/mlx5/mlx5_prm.h | 29 ++
drivers/common/mlx5/version.map | 1 +
drivers/net/mlx5/mlx5.c | 1 +
drivers/net/mlx5/mlx5.h | 7 +
drivers/net/mlx5/mlx5_defs.h | 3 +
drivers/net/mlx5/mlx5_devx.c | 151 ++++--
drivers/net/mlx5/mlx5_flow.c | 499 +++++++++++++++++--
drivers/net/mlx5/mlx5_flow.h | 86 ++++
drivers/net/mlx5/mlx5_flow_dv.c | 705 +++++++++++++++++++++++++--
drivers/net/mlx5/mlx5_flow_verbs.c | 3 +-
drivers/net/mlx5/mlx5_rxq.c | 110 ++++-
drivers/net/mlx5/mlx5_rxtx.h | 7 +-
14 files changed, 1596 insertions(+), 100 deletions(-)
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [dpdk-dev] [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
@ 2020-10-23 10:24 ` Andrey Vesnovaty
2020-10-23 14:16 ` Slava Ovsiienko
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 2/4] net/mlx5: modify hash Rx queue objects Andrey Vesnovaty
` (3 subsequent siblings)
4 siblings, 1 reply; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-23 10:24 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
Implement TIR modification (see mlx5_devx_cmd_modify_tir()) using the
DevX API. TIR is the object containing the hashed table of Rx queues.
The ability to configure/modify this HW-related object is a prerequisite
for implementing rte_flow_shared_action_update() for the shared RSS
action in the mlx5 PMD. The HW-related structures for TIR modification
are added in mlx5_prm.h.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 84 ++++++++++++++++++++++++++++
drivers/common/mlx5/mlx5_devx_cmds.h | 10 ++++
drivers/common/mlx5/mlx5_prm.h | 29 ++++++++++
drivers/common/mlx5/version.map | 1 +
4 files changed, 124 insertions(+)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index d3e90b5311..8aee12d527 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -1109,6 +1109,90 @@ mlx5_devx_cmd_create_tir(void *ctx,
return tir;
}
+/**
+ * Modify TIR using DevX API.
+ *
+ * @param[in] tir
+ * Pointer to TIR DevX object structure.
+ * @param [in] modify_tir_attr
+ * Pointer to TIR modification attributes structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
+ struct mlx5_devx_modify_tir_attr *modify_tir_attr)
+{
+ struct mlx5_devx_tir_attr *tir_attr = &modify_tir_attr->tir;
+ uint32_t in[MLX5_ST_SZ_DW(modify_tir_in)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(modify_tir_out)] = {0};
+ void *tir_ctx;
+ int ret;
+
+ MLX5_SET(modify_tir_in, in, opcode, MLX5_CMD_OP_MODIFY_TIR);
+ MLX5_SET(modify_tir_in, in, tirn, modify_tir_attr->tirn);
+ MLX5_SET64(modify_tir_in, in, modify_bitmask,
+ modify_tir_attr->modify_bitmask);
+ tir_ctx = MLX5_ADDR_OF(modify_rq_in, in, ctx);
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_LRO) {
+ MLX5_SET(tirc, tir_ctx, lro_timeout_period_usecs,
+ tir_attr->lro_timeout_period_usecs);
+ MLX5_SET(tirc, tir_ctx, lro_enable_mask,
+ tir_attr->lro_enable_mask);
+ MLX5_SET(tirc, tir_ctx, lro_max_msg_sz,
+ tir_attr->lro_max_msg_sz);
+ }
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE)
+ MLX5_SET(tirc, tir_ctx, indirect_table,
+ tir_attr->indirect_table);
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH) {
+ int i;
+ void *outer, *inner;
+
+ MLX5_SET(tirc, tir_ctx, rx_hash_symmetric,
+ tir_attr->rx_hash_symmetric);
+ MLX5_SET(tirc, tir_ctx, rx_hash_fn, tir_attr->rx_hash_fn);
+ for (i = 0; i < 10; i++) {
+ MLX5_SET(tirc, tir_ctx, rx_hash_toeplitz_key[i],
+ tir_attr->rx_hash_toeplitz_key[i]);
+ }
+ outer = MLX5_ADDR_OF(tirc, tir_ctx,
+ rx_hash_field_selector_outer);
+ MLX5_SET(rx_hash_field_select, outer, l3_prot_type,
+ tir_attr->rx_hash_field_selector_outer.l3_prot_type);
+ MLX5_SET(rx_hash_field_select, outer, l4_prot_type,
+ tir_attr->rx_hash_field_selector_outer.l4_prot_type);
+ MLX5_SET
+ (rx_hash_field_select, outer, selected_fields,
+ tir_attr->rx_hash_field_selector_outer.selected_fields);
+ inner = MLX5_ADDR_OF(tirc, tir_ctx,
+ rx_hash_field_selector_inner);
+ MLX5_SET(rx_hash_field_select, inner, l3_prot_type,
+ tir_attr->rx_hash_field_selector_inner.l3_prot_type);
+ MLX5_SET(rx_hash_field_select, inner, l4_prot_type,
+ tir_attr->rx_hash_field_selector_inner.l4_prot_type);
+ MLX5_SET
+ (rx_hash_field_select, inner, selected_fields,
+ tir_attr->rx_hash_field_selector_inner.selected_fields);
+ }
+ if (modify_tir_attr->modify_bitmask &
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_SELF_LB_EN) {
+ MLX5_SET(tirc, tir_ctx, self_lb_block, tir_attr->self_lb_block);
+ }
+ ret = mlx5_glue->devx_obj_modify(tir->obj, in, sizeof(in),
+ out, sizeof(out));
+ if (ret) {
+ DRV_LOG(ERR, "Failed to modify TIR using DevX");
+ rte_errno = errno;
+ return -errno;
+ }
+ return ret;
+}
+
/**
* Create RQT using DevX API.
*
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index c2d089cab8..abbea67784 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -192,6 +192,13 @@ struct mlx5_devx_tir_attr {
struct mlx5_rx_hash_field_select rx_hash_field_selector_inner;
};
+/* TIR attributes structure, used by TIR modify. */
+struct mlx5_devx_modify_tir_attr {
+ uint32_t tirn:24;
+ uint64_t modify_bitmask;
+ struct mlx5_devx_tir_attr tir;
+};
+
/* RQT attributes structure, used by RQT operations. */
struct mlx5_devx_rqt_attr {
uint8_t rq_type;
@@ -436,6 +443,9 @@ __rte_internal
int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
struct mlx5_devx_rqt_attr *rqt_attr);
__rte_internal
+int mlx5_devx_cmd_modify_tir(struct mlx5_devx_obj *tir,
+ struct mlx5_devx_modify_tir_attr *tir_attr);
+__rte_internal
int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
uint32_t ids[], uint32_t num);
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index b8b656a53f..d342263c85 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -830,6 +830,7 @@ enum {
MLX5_CMD_OP_ACCESS_REGISTER = 0x805,
MLX5_CMD_OP_ALLOC_TRANSPORT_DOMAIN = 0x816,
MLX5_CMD_OP_CREATE_TIR = 0x900,
+ MLX5_CMD_OP_MODIFY_TIR = 0x901,
MLX5_CMD_OP_CREATE_SQ = 0X904,
MLX5_CMD_OP_MODIFY_SQ = 0X905,
MLX5_CMD_OP_CREATE_RQ = 0x908,
@@ -1919,6 +1920,34 @@ struct mlx5_ifc_create_tir_in_bits {
struct mlx5_ifc_tirc_bits ctx;
};
+enum {
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_LRO = 1ULL << 0,
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE = 1ULL << 1,
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH = 1ULL << 2,
+ /* bit 3 - tunneled_offload_en modify not supported. */
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_SELF_LB_EN = 1ULL << 4,
+};
+
+struct mlx5_ifc_modify_tir_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+ u8 syndrome[0x20];
+ u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_modify_tir_in_bits {
+ u8 opcode[0x10];
+ u8 uid[0x10];
+ u8 reserved_at_20[0x10];
+ u8 op_mod[0x10];
+ u8 reserved_at_40[0x8];
+ u8 tirn[0x18];
+ u8 reserved_at_60[0x20];
+ u8 modify_bitmask[0x40];
+ u8 reserved_at_c0[0x40];
+ struct mlx5_ifc_tirc_bits ctx;
+};
+
enum {
MLX5_INLINE_Q_TYPE_RQ = 0x0,
MLX5_INLINE_Q_TYPE_VIRTQ = 0x1,
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index c4d57c08a7..884001ca7d 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -30,6 +30,7 @@ INTERNAL {
mlx5_devx_cmd_modify_rq;
mlx5_devx_cmd_modify_rqt;
mlx5_devx_cmd_modify_sq;
+ mlx5_devx_cmd_modify_tir;
mlx5_devx_cmd_modify_virtq;
mlx5_devx_cmd_qp_query_tis_td;
mlx5_devx_cmd_query_hca_attr;
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [dpdk-dev] [PATCH v2 2/4] net/mlx5: modify hash Rx queue objects
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
@ 2020-10-23 10:24 ` Andrey Vesnovaty
2020-10-23 14:17 ` Slava Ovsiienko
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 3/4] net/mlx5: shared action PMD Andrey Vesnovaty
` (2 subsequent siblings)
4 siblings, 1 reply; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-23 10:24 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
Implement modification of the hashed table of Rx queue objects (see
mlx5_hrxq_modify()). This implementation relies on the capability to
modify a TIR object via the DevX API, i.e. the current implementation
doesn't support Verbs HW object operations. The ability to modify the
hashed table of Rx queue objects is a prerequisite for implementing
rte_flow_shared_action_update() for the shared RSS action in the mlx5
PMD.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
---
drivers/net/mlx5/mlx5.h | 5 +
drivers/net/mlx5/mlx5_devx.c | 151 ++++++++++++++++++++++-------
drivers/net/mlx5/mlx5_flow_dv.c | 19 ++--
drivers/net/mlx5/mlx5_flow_verbs.c | 3 +-
drivers/net/mlx5/mlx5_rxq.c | 110 ++++++++++++++++++++-
drivers/net/mlx5/mlx5_rxtx.h | 7 +-
6 files changed, 245 insertions(+), 50 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c9d5d71630..9be3061165 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -727,6 +727,7 @@ struct mlx5_ind_table_obj {
struct mlx5_hrxq {
ILIST_ENTRY(uint32_t)next; /* Index to the next element. */
rte_atomic32_t refcnt; /* Reference counter. */
+ uint8_t shared:1; /* This object used in shared action. */
struct mlx5_ind_table_obj *ind_table; /* Indirection table. */
RTE_STD_C11
union {
@@ -798,6 +799,10 @@ struct mlx5_obj_ops {
void (*ind_table_destroy)(struct mlx5_ind_table_obj *ind_tbl);
int (*hrxq_new)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
int tunnel __rte_unused);
+ int (*hrxq_modify)(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
+ const uint8_t *rss_key,
+ uint64_t hash_fields,
+ const struct mlx5_ind_table_obj *ind_tbl);
void (*hrxq_destroy)(struct mlx5_hrxq *hrxq);
int (*drop_action_create)(struct rte_eth_dev *dev);
void (*drop_action_destroy)(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_devx.c b/drivers/net/mlx5/mlx5_devx.c
index 0c99fe7519..5fce4cd555 100644
--- a/drivers/net/mlx5/mlx5_devx.c
+++ b/drivers/net/mlx5/mlx5_devx.c
@@ -738,33 +738,37 @@ mlx5_devx_ind_table_destroy(struct mlx5_ind_table_obj *ind_tbl)
}
/**
- * Create an Rx Hash queue.
+ * Set TIR attribute struct with relevant input values.
*
- * @param dev
+ * @param[in] dev
* Pointer to Ethernet device.
- * @param hrxq
- * Pointer to Rx Hash queue.
- * @param tunnel
+ * @param[in] rss_key
+ * RSS key for the Rx hash queue.
+ * @param[in] hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param[in] ind_tbl
+ * Indirection table for TIR.
+ * @param[in] tunnel
* Tunnel type.
+ * @param[out] tir_attr
+ * Parameters structure for TIR creation/modification.
*
* @return
- * 0 on success, a negative errno value otherwise and rte_errno is set.
+ * Nothing; *tir_attr* is filled with the relevant input values.
*/
-static int
-mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
- int tunnel __rte_unused)
+static void
+mlx5_devx_tir_attr_set(struct rte_eth_dev *dev, const uint8_t *rss_key,
+ uint64_t hash_fields,
+ const struct mlx5_ind_table_obj *ind_tbl,
+ int tunnel, struct mlx5_devx_tir_attr *tir_attr)
{
struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_ind_table_obj *ind_tbl = hrxq->ind_table;
struct mlx5_rxq_data *rxq_data = (*priv->rxqs)[ind_tbl->queues[0]];
struct mlx5_rxq_ctrl *rxq_ctrl =
container_of(rxq_data, struct mlx5_rxq_ctrl, rxq);
- struct mlx5_devx_tir_attr tir_attr;
- const uint8_t *rss_key = hrxq->rss_key;
- uint64_t hash_fields = hrxq->hash_fields;
+ enum mlx5_rxq_type rxq_obj_type = rxq_ctrl->type;
bool lro = true;
uint32_t i;
- int err;
/* Enable TIR LRO only if all the queues were configured for. */
for (i = 0; i < ind_tbl->queues_n; ++i) {
@@ -773,26 +777,24 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
break;
}
}
- memset(&tir_attr, 0, sizeof(tir_attr));
- tir_attr.disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
- tir_attr.rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
- tir_attr.tunneled_offload_en = !!tunnel;
+ memset(tir_attr, 0, sizeof(*tir_attr));
+ tir_attr->disp_type = MLX5_TIRC_DISP_TYPE_INDIRECT;
+ tir_attr->rx_hash_fn = MLX5_RX_HASH_FN_TOEPLITZ;
+ tir_attr->tunneled_offload_en = !!tunnel;
/* If needed, translate hash_fields bitmap to PRM format. */
if (hash_fields) {
- struct mlx5_rx_hash_field_select *rx_hash_field_select = NULL;
+ struct mlx5_rx_hash_field_select *rx_hash_field_select =
#ifdef HAVE_IBV_DEVICE_TUNNEL_SUPPORT
- rx_hash_field_select = hash_fields & IBV_RX_HASH_INNER ?
- &tir_attr.rx_hash_field_selector_inner :
- &tir_attr.rx_hash_field_selector_outer;
-#else
- rx_hash_field_select = &tir_attr.rx_hash_field_selector_outer;
+ hash_fields & IBV_RX_HASH_INNER ?
+ &tir_attr->rx_hash_field_selector_inner :
#endif
+ &tir_attr->rx_hash_field_selector_outer;
/* 1 bit: 0: IPv4, 1: IPv6. */
rx_hash_field_select->l3_prot_type =
!!(hash_fields & MLX5_IPV6_IBV_RX_HASH);
/* 1 bit: 0: TCP, 1: UDP. */
rx_hash_field_select->l4_prot_type =
- !!(hash_fields & MLX5_UDP_IBV_RX_HASH);
+ !!(hash_fields & MLX5_UDP_IBV_RX_HASH);
/* Bitmask which sets which fields to use in RX Hash. */
rx_hash_field_select->selected_fields =
((!!(hash_fields & MLX5_L3_SRC_IBV_RX_HASH)) <<
@@ -804,20 +806,47 @@ mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
(!!(hash_fields & MLX5_L4_DST_IBV_RX_HASH)) <<
MLX5_RX_HASH_FIELD_SELECT_SELECTED_FIELDS_L4_DPORT;
}
- if (rxq_ctrl->type == MLX5_RXQ_TYPE_HAIRPIN)
- tir_attr.transport_domain = priv->sh->td->id;
+ if (rxq_obj_type == MLX5_RXQ_TYPE_HAIRPIN)
+ tir_attr->transport_domain = priv->sh->td->id;
else
- tir_attr.transport_domain = priv->sh->tdn;
- memcpy(tir_attr.rx_hash_toeplitz_key, rss_key, MLX5_RSS_HASH_KEY_LEN);
- tir_attr.indirect_table = ind_tbl->rqt->id;
+ tir_attr->transport_domain = priv->sh->tdn;
+ memcpy(tir_attr->rx_hash_toeplitz_key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ tir_attr->indirect_table = ind_tbl->rqt->id;
if (dev->data->dev_conf.lpbk_mode)
- tir_attr.self_lb_block = MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
+ tir_attr->self_lb_block =
+ MLX5_TIRC_SELF_LB_BLOCK_BLOCK_UNICAST;
if (lro) {
- tir_attr.lro_timeout_period_usecs = priv->config.lro.timeout;
- tir_attr.lro_max_msg_sz = priv->max_lro_msg_size;
- tir_attr.lro_enable_mask = MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
- MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
+ tir_attr->lro_timeout_period_usecs = priv->config.lro.timeout;
+ tir_attr->lro_max_msg_sz = priv->max_lro_msg_size;
+ tir_attr->lro_enable_mask =
+ MLX5_TIRC_LRO_ENABLE_MASK_IPV4_LRO |
+ MLX5_TIRC_LRO_ENABLE_MASK_IPV6_LRO;
}
+}
+
+/**
+ * Create an Rx Hash queue.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param hrxq
+ * Pointer to Rx Hash queue.
+ * @param tunnel
+ * Tunnel type.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_hrxq_new(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
+ int tunnel __rte_unused)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_devx_tir_attr tir_attr = {0};
+ int err;
+
+ mlx5_devx_tir_attr_set(dev, hrxq->rss_key, hrxq->hash_fields,
+ hrxq->ind_table, tunnel, &tir_attr);
hrxq->tir = mlx5_devx_cmd_create_tir(priv->sh->ctx, &tir_attr);
if (!hrxq->tir) {
DRV_LOG(ERR, "Port %u cannot create DevX TIR.",
@@ -854,6 +883,57 @@ mlx5_devx_tir_destroy(struct mlx5_hrxq *hrxq)
claim_zero(mlx5_devx_cmd_destroy(hrxq->tir));
}
+/**
+ * Modify an Rx Hash queue configuration.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param hrxq
+ * Hash Rx queue to modify.
+ * @param rss_key
+ * RSS key for the Rx hash queue.
+ * @param hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param[in] ind_tbl
+ * Indirection table for TIR.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_devx_hrxq_modify(struct rte_eth_dev *dev, struct mlx5_hrxq *hrxq,
+ const uint8_t *rss_key,
+ uint64_t hash_fields,
+ const struct mlx5_ind_table_obj *ind_tbl)
+{
+ struct mlx5_devx_modify_tir_attr modify_tir = {0};
+
+ /*
+ * Untested modification fields:
+ * - rx_hash_symmetric is not set in hrxq_new(),
+ * - rx_hash_fn is hard-coded in hrxq_new(),
+ * - lro_xxx is not set after rxq setup.
+ */
+ if (ind_tbl != hrxq->ind_table)
+ modify_tir.modify_bitmask |=
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_INDIRECT_TABLE;
+ if (hash_fields != hrxq->hash_fields ||
+ memcmp(hrxq->rss_key, rss_key, MLX5_RSS_HASH_KEY_LEN))
+ modify_tir.modify_bitmask |=
+ MLX5_MODIFY_TIR_IN_MODIFY_BITMASK_HASH;
+ mlx5_devx_tir_attr_set(dev, rss_key, hash_fields, ind_tbl,
+ 0, /* N/A - tunnel modification unsupported */
+ &modify_tir.tir);
+ modify_tir.tirn = hrxq->tir->id;
+ if (mlx5_devx_cmd_modify_tir(hrxq->tir, &modify_tir)) {
+ DRV_LOG(ERR, "port %u cannot modify DevX TIR",
+ dev->data->port_id);
+ rte_errno = errno;
+ return -rte_errno;
+ }
+ return 0;
+}
+
/**
* Create a DevX drop action for Rx Hash queue.
*
@@ -1364,6 +1444,7 @@ struct mlx5_obj_ops devx_obj_ops = {
.ind_table_destroy = mlx5_devx_ind_table_destroy,
.hrxq_new = mlx5_devx_hrxq_new,
.hrxq_destroy = mlx5_devx_tir_destroy,
+ .hrxq_modify = mlx5_devx_hrxq_modify,
.drop_action_create = mlx5_devx_drop_action_create,
.drop_action_destroy = mlx5_devx_drop_action_destroy,
.txq_obj_new = mlx5_txq_devx_obj_new,
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 15cd34e331..2bac7dac9b 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8425,20 +8425,16 @@ flow_dv_handle_rx_queue(struct rte_eth_dev *dev,
struct mlx5_hrxq *hrxq;
MLX5_ASSERT(rss_desc->queue_num);
- *hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
+ *hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key, MLX5_RSS_HASH_KEY_LEN,
dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num);
+ rss_desc->queue, rss_desc->queue_num);
if (!*hrxq_idx) {
*hrxq_idx = mlx5_hrxq_new
- (dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
+ (dev, rss_desc->key, MLX5_RSS_HASH_KEY_LEN,
dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num,
- !!(dh->layers &
- MLX5_FLOW_LAYER_TUNNEL));
+ rss_desc->queue, rss_desc->queue_num,
+ !!(dh->layers & MLX5_FLOW_LAYER_TUNNEL),
+ false);
if (!*hrxq_idx)
return NULL;
}
@@ -10026,7 +10022,8 @@ __flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
rss_desc->queue,
rss_desc->queue_num,
!!(dh->layers &
- MLX5_FLOW_LAYER_TUNNEL));
+ MLX5_FLOW_LAYER_TUNNEL),
+ false);
}
hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
hrxq_idx);
diff --git a/drivers/net/mlx5/mlx5_flow_verbs.c b/drivers/net/mlx5/mlx5_flow_verbs.c
index 710622ce92..9cc4410667 100644
--- a/drivers/net/mlx5/mlx5_flow_verbs.c
+++ b/drivers/net/mlx5/mlx5_flow_verbs.c
@@ -1984,7 +1984,8 @@ flow_verbs_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
rss_desc->queue,
rss_desc->queue_num,
!!(handle->layers &
- MLX5_FLOW_LAYER_TUNNEL));
+ MLX5_FLOW_LAYER_TUNNEL),
+ false);
hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
hrxq_idx);
if (!hrxq) {
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index ca1625eac6..0176ecef76 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1723,6 +1723,29 @@ mlx5_rxq_get_type(struct rte_eth_dev *dev, uint16_t idx)
return MLX5_RXQ_TYPE_UNDEFINED;
}
+/**
+ * Match queues listed in arguments to queues contained in indirection table
+ * object.
+ *
+ * @param ind_tbl
+ * Pointer to indirection table to match.
+ * @param queues
+ * Queues to match to the queues in the indirection table.
+ * @param queues_n
+ * Number of queues in the array.
+ *
+ * @return
+ * 1 if all queues in the indirection table match, 0 otherwise.
+ */
+static int
+mlx5_ind_table_obj_match_queues(const struct mlx5_ind_table_obj *ind_tbl,
+ const uint16_t *queues, uint32_t queues_n)
+{
+ return (ind_tbl->queues_n == queues_n) &&
+ (!memcmp(ind_tbl->queues, queues,
+ ind_tbl->queues_n * sizeof(ind_tbl->queues[0])));
+}
+
/**
* Get an indirection table.
*
@@ -1900,6 +1923,8 @@ mlx5_hrxq_get(struct rte_eth_dev *dev,
hrxq, next) {
struct mlx5_ind_table_obj *ind_tbl;
+ if (hrxq->shared)
+ continue;
if (hrxq->rss_key_len != rss_key_len)
continue;
if (memcmp(hrxq->rss_key, rss_key, rss_key_len))
@@ -1919,6 +1944,86 @@ mlx5_hrxq_get(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Modify an Rx Hash queue configuration.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ * @param hrxq
+ * Index to Hash Rx queue to modify.
+ * @param rss_key
+ * RSS key for the Rx hash queue.
+ * @param rss_key_len
+ * RSS key length.
+ * @param hash_fields
+ * Verbs protocol hash field to make the RSS on.
+ * @param queues
+ * Queues entering in hash queue. In case of empty hash_fields only the
+ * first queue index will be taken for the indirection table.
+ * @param queues_n
+ * Number of queues.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n)
+{
+ int err;
+ struct mlx5_ind_table_obj *ind_tbl = NULL;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_hrxq *hrxq =
+ mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ], hrxq_idx);
+ int ret;
+
+ if (!hrxq) {
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+ /* validations */
+ if (hrxq->rss_key_len != rss_key_len) {
+ /* rss_key_len is fixed at 40 bytes and not supposed to change. */
+ rte_errno = EINVAL;
+ return -rte_errno;
+ }
+ queues_n = hash_fields ? queues_n : 1;
+ if (mlx5_ind_table_obj_match_queues(hrxq->ind_table,
+ queues, queues_n)) {
+ ind_tbl = hrxq->ind_table;
+ } else {
+ ind_tbl = mlx5_ind_table_obj_get(dev, queues, queues_n);
+ if (!ind_tbl)
+ ind_tbl = mlx5_ind_table_obj_new(dev, queues, queues_n);
+ }
+ if (!ind_tbl) {
+ rte_errno = ENOMEM;
+ return -rte_errno;
+ }
+ MLX5_ASSERT(priv->obj_ops.hrxq_modify);
+ ret = priv->obj_ops.hrxq_modify(dev, hrxq, rss_key,
+ hash_fields, ind_tbl);
+ if (ret) {
+ rte_errno = errno;
+ goto error;
+ }
+ if (ind_tbl != hrxq->ind_table) {
+ mlx5_ind_table_obj_release(dev, hrxq->ind_table);
+ hrxq->ind_table = ind_tbl;
+ }
+ hrxq->hash_fields = hash_fields;
+ memcpy(hrxq->rss_key, rss_key, rss_key_len);
+ return 0;
+error:
+ err = rte_errno;
+ if (ind_tbl != hrxq->ind_table)
+ mlx5_ind_table_obj_release(dev, ind_tbl);
+ rte_errno = err;
+ return -rte_errno;
+}
+
/**
* Release the hash Rx queue.
*
@@ -1972,6 +2077,8 @@ mlx5_hrxq_release(struct rte_eth_dev *dev, uint32_t hrxq_idx)
* Number of queues.
* @param tunnel
* Tunnel type.
+ * @param shared
+ * If true, the new hash Rx queue object will be used in a shared action.
*
* @return
* The DevX object initialized index, 0 otherwise and rte_errno is set.
@@ -1981,7 +2088,7 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
const uint16_t *queues, uint32_t queues_n,
- int tunnel __rte_unused)
+ int tunnel, bool shared)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_hrxq *hrxq = NULL;
@@ -2000,6 +2107,7 @@ mlx5_hrxq_new(struct rte_eth_dev *dev,
hrxq = mlx5_ipool_zmalloc(priv->sh->ipool[MLX5_IPOOL_HRXQ], &hrxq_idx);
if (!hrxq)
goto error;
+ hrxq->shared = !!shared;
hrxq->ind_table = ind_tbl;
hrxq->rss_key_len = rss_key_len;
hrxq->hash_fields = hash_fields;
diff --git a/drivers/net/mlx5/mlx5_rxtx.h b/drivers/net/mlx5/mlx5_rxtx.h
index 0eafa22d63..1b35a2669c 100644
--- a/drivers/net/mlx5/mlx5_rxtx.h
+++ b/drivers/net/mlx5/mlx5_rxtx.h
@@ -339,7 +339,7 @@ uint32_t mlx5_hrxq_new(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
const uint16_t *queues, uint32_t queues_n,
- int tunnel __rte_unused);
+ int tunnel, bool shared);
uint32_t mlx5_hrxq_get(struct rte_eth_dev *dev,
const uint8_t *rss_key, uint32_t rss_key_len,
uint64_t hash_fields,
@@ -352,7 +352,10 @@ void mlx5_drop_action_destroy(struct rte_eth_dev *dev);
uint64_t mlx5_get_rx_port_offloads(void);
uint64_t mlx5_get_rx_queue_offloads(struct rte_eth_dev *dev);
void mlx5_rxq_timestamp_set(struct rte_eth_dev *dev);
-
+int mlx5_hrxq_modify(struct rte_eth_dev *dev, uint32_t hrxq_idx,
+ const uint8_t *rss_key, uint32_t rss_key_len,
+ uint64_t hash_fields,
+ const uint16_t *queues, uint32_t queues_n);
/* mlx5_txq.c */
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [dpdk-dev] [PATCH v2 3/4] net/mlx5: shared action PMD
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 2/4] net/mlx5: modify hash Rx queue objects Andrey Vesnovaty
@ 2020-10-23 10:24 ` Andrey Vesnovaty
2020-10-23 14:17 ` Slava Ovsiienko
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
2020-10-25 12:43 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Raslan Darawsheh
4 siblings, 1 reply; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-23 10:24 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
Handle shared action on flow validation/creation/destruction.
The mlx5 PMD translates a shared action into a regular one before handling
flow validation/creation. The shared action translation is applied in order
to utilize the same execution path for both shared and regular actions.
The current implementation supports shared action translation for the
shared RSS action only.
RSS action validation is split so that a shared RSS action is validated on
its creation, in addition to the action validation in the flow
validation/creation path.
Implement rte_flow shared action API for mlx5 PMD, mostly forwarding
calls to flow driver operations (see struct mlx5_flow_driver_ops).
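As a rough illustration of the translation step described above, the sketch below rewrites each shared entry of an action list into its regular counterpart before the list is handed to the common creation path. This is not the actual mlx5 code; the enum, struct, and function names are simplified stand-ins for `struct rte_flow_action`, `struct rte_flow_shared_action`, and `flow_shared_actions_translate()`.

```c
#include <assert.h>
#include <stddef.h>

/* Simplified, hypothetical stand-ins for the rte_flow action list and the
 * mlx5 shared-action object. */
enum act_type { ACT_END, ACT_QUEUE, ACT_RSS, ACT_SHARED };

struct shared_action {
	enum act_type translated_type; /* Regular type, e.g. ACT_RSS. */
	const void *conf;              /* Embedded regular configuration. */
};

struct action {
	enum act_type type;
	const void *conf; /* For ACT_SHARED, points at a shared_action. */
};

/* Copy the action list, replacing each ACT_SHARED entry with its regular
 * counterpart so the ordinary flow-creation path can consume it unchanged.
 * Returns the number of shared actions translated. */
static int
translate_shared(const struct action *in, struct action *out, size_t n)
{
	int shared_n = 0;
	size_t i;

	for (i = 0; i < n; i++) {
		out[i] = in[i];
		if (in[i].type == ACT_SHARED) {
			const struct shared_action *sa = in[i].conf;

			out[i].type = sa->translated_type;
			out[i].conf = sa->conf;
			shared_n++;
		}
	}
	return shared_n;
}
```

In the real patch the translated list is heap-allocated only when at least one shared action was found; otherwise the original list is used as-is.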
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
---
drivers/net/mlx5/mlx5.c | 1 +
drivers/net/mlx5/mlx5.h | 2 +
drivers/net/mlx5/mlx5_defs.h | 3 +
drivers/net/mlx5/mlx5_flow.c | 499 +++++++++++++++++++++++++++++++++--
drivers/net/mlx5/mlx5_flow.h | 86 ++++++
5 files changed, 563 insertions(+), 28 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index e4ce9a9cb7..2484251b2f 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -1401,6 +1401,7 @@ mlx5_dev_close(struct rte_eth_dev *dev)
* then this will return directly without any action.
*/
mlx5_flow_list_flush(dev, &priv->flows, true);
+ mlx5_shared_action_flush(dev);
mlx5_flow_meter_flush(dev, NULL);
/* Free the intermediate buffers for flow creation. */
mlx5_flow_free_intermediate(dev);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9be3061165..658533de6f 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -891,6 +891,8 @@ struct mlx5_priv {
uint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. */
struct mlx5_mp_id mp_id; /* ID of a multi-process process */
LIST_HEAD(fdir, mlx5_fdir_flow) fdir_flows; /* fdir flows. */
+ LIST_HEAD(shared_action, rte_flow_shared_action) shared_actions;
+ /* shared actions */
};
#define PORT_ID(priv) ((priv)->dev_data->port_id)
diff --git a/drivers/net/mlx5/mlx5_defs.h b/drivers/net/mlx5/mlx5_defs.h
index 0df47391ee..22e41df1eb 100644
--- a/drivers/net/mlx5/mlx5_defs.h
+++ b/drivers/net/mlx5/mlx5_defs.h
@@ -197,6 +197,9 @@
#define MLX5_HAIRPIN_QUEUE_STRIDE 6
#define MLX5_HAIRPIN_JUMBO_LOG_SIZE (14 + 2)
+/* Maximum number of shared actions supported by rte_flow */
+#define MLX5_MAX_SHARED_ACTIONS 1
+
/* Definition of static_assert found in /usr/include/assert.h */
#ifndef HAVE_STATIC_ASSERT
#define static_assert _Static_assert
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index d7243a878b..6077685430 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -548,6 +548,26 @@ static const struct mlx5_flow_expand_node mlx5_support_expansion[] = {
},
};
+static struct rte_flow_shared_action *
+mlx5_shared_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+static int mlx5_shared_action_destroy
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *shared_action,
+ struct rte_flow_error *error);
+static int mlx5_shared_action_update
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *shared_action,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+static int mlx5_shared_action_query
+ (struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action *action,
+ void *data,
+ struct rte_flow_error *error);
+
static const struct rte_flow_ops mlx5_flow_ops = {
.validate = mlx5_flow_validate,
.create = mlx5_flow_create,
@@ -557,6 +577,10 @@ static const struct rte_flow_ops mlx5_flow_ops = {
.query = mlx5_flow_query,
.dev_dump = mlx5_flow_dev_dump,
.get_aged_flows = mlx5_flow_get_aged_flows,
+ .shared_action_create = mlx5_shared_action_create,
+ .shared_action_destroy = mlx5_shared_action_destroy,
+ .shared_action_update = mlx5_shared_action_update,
+ .shared_action_query = mlx5_shared_action_query,
};
/* Convert FDIR request to Generic flow. */
@@ -1326,16 +1350,10 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
/*
* Validate the rss action.
*
- * @param[in] action
- * Pointer to the queue action.
- * @param[in] action_flags
- * Bit-fields that holds the actions detected until now.
* @param[in] dev
* Pointer to the Ethernet device structure.
- * @param[in] attr
- * Attributes of flow that includes this action.
- * @param[in] item_flags
- * Items that were detected.
+ * @param[in] action
+ * Pointer to the RSS action.
* @param[out] error
* Pointer to error structure.
*
@@ -1343,23 +1361,14 @@ mlx5_flow_validate_action_queue(const struct rte_flow_action *action,
* 0 on success, a negative errno value otherwise and rte_errno is set.
*/
int
-mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
- uint64_t action_flags,
- struct rte_eth_dev *dev,
- const struct rte_flow_attr *attr,
- uint64_t item_flags,
- struct rte_flow_error *error)
+mlx5_validate_action_rss(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
const struct rte_flow_action_rss *rss = action->conf;
- int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
unsigned int i;
- if (action_flags & MLX5_FLOW_FATE_ACTIONS)
- return rte_flow_error_set(error, EINVAL,
- RTE_FLOW_ERROR_TYPE_ACTION, NULL,
- "can't have 2 fate actions"
- " in same flow");
if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ)
return rte_flow_error_set(error, ENOTSUP,
@@ -1433,6 +1442,48 @@ mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
(error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_CONF,
&rss->queue[i], "queue is not configured");
}
+ return 0;
+}
+
+/*
+ * Validate the rss action.
+ *
+ * @param[in] action
+ * Pointer to the RSS action.
+ * @param[in] action_flags
+ * Bit-fields that holds the actions detected until now.
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] attr
+ * Attributes of flow that includes this action.
+ * @param[in] item_flags
+ * Items that were detected.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
+ uint64_t action_flags,
+ struct rte_eth_dev *dev,
+ const struct rte_flow_attr *attr,
+ uint64_t item_flags,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_action_rss *rss = action->conf;
+ int tunnel = !!(item_flags & MLX5_FLOW_LAYER_TUNNEL);
+ int ret;
+
+ if (action_flags & MLX5_FLOW_FATE_ACTIONS)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "can't have 2 fate actions"
+ " in same flow");
+ ret = mlx5_validate_action_rss(dev, action, error);
+ if (ret)
+ return ret;
if (attr->egress)
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
@@ -3084,6 +3135,138 @@ flow_get_rss_action(const struct rte_flow_action actions[])
return NULL;
}
+/* Maps a shared action to its translated non-shared action in an actions array. */
+struct mlx5_translated_shared_action {
+ struct rte_flow_shared_action *action; /**< Shared action */
+ int index; /**< Index in related array of rte_flow_action */
+};
+
+/**
+ * Translates actions of type RTE_FLOW_ACTION_TYPE_SHARED to the related
+ * non-shared action if translation is possible.
+ * This functionality is used to run the same execution path for both shared
+ * and non-shared actions on flow create. All necessary preparations for
+ * shared action handling should be performed on the *shared* actions list
+ * returned from this call.
+ *
+ * @param[in] actions
+ * List of actions to translate.
+ * @param[out] shared
+ * List to store translated shared actions.
+ * @param[in, out] shared_n
+ * Size of *shared* array. On return should be updated with number of shared
+ * actions retrieved from the *actions* list.
+ * @param[out] translated_actions
+ * List of actions where all shared actions were translated to non shared
+ * if possible. NULL if no translation took place.
+ * @param[out] error
+ * Pointer to the error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+flow_shared_actions_translate(const struct rte_flow_action actions[],
+ struct mlx5_translated_shared_action *shared,
+ int *shared_n,
+ struct rte_flow_action **translated_actions,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_action *translated = NULL;
+ size_t actions_size;
+ int n;
+ int copied_n = 0;
+ struct mlx5_translated_shared_action *shared_end = NULL;
+
+ for (n = 0; actions[n].type != RTE_FLOW_ACTION_TYPE_END; n++) {
+ if (actions[n].type != RTE_FLOW_ACTION_TYPE_SHARED)
+ continue;
+ if (copied_n == *shared_n) {
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION_NUM,
+ NULL, "too many shared actions");
+ }
+ rte_memcpy(&shared[copied_n].action, &actions[n].conf,
+ sizeof(actions[n].conf));
+ shared[copied_n].index = n;
+ copied_n++;
+ }
+ n++;
+ *shared_n = copied_n;
+ if (!copied_n)
+ return 0;
+ actions_size = sizeof(struct rte_flow_action) * n;
+ translated = mlx5_malloc(MLX5_MEM_ZERO, actions_size, 0, SOCKET_ID_ANY);
+ if (!translated) {
+ rte_errno = ENOMEM;
+ return -ENOMEM;
+ }
+ memcpy(translated, actions, actions_size);
+ for (shared_end = shared + copied_n; shared < shared_end; shared++) {
+ const struct rte_flow_shared_action *shared_action;
+
+ shared_action = shared->action;
+ switch (shared_action->type) {
+ case MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS:
+ translated[shared->index].type =
+ RTE_FLOW_ACTION_TYPE_RSS;
+ translated[shared->index].conf =
+ &shared_action->rss.origin;
+ break;
+ default:
+ mlx5_free(translated);
+ return rte_flow_error_set
+ (error, EINVAL, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "invalid shared action type");
+ }
+ }
+ *translated_actions = translated;
+ return 0;
+}
+
+/**
+ * Get Shared RSS action from the action list.
+ *
+ * @param[in] shared
+ * Pointer to the list of actions.
+ * @param[in] shared_n
+ * Actions list length.
+ *
+ * @return
+ * Pointer to the MLX5 RSS action if exists, otherwise return NULL.
+ */
+static struct mlx5_shared_action_rss *
+flow_get_shared_rss_action(struct mlx5_translated_shared_action *shared,
+ int shared_n)
+{
+ struct mlx5_translated_shared_action *shared_end;
+
+ for (shared_end = shared + shared_n; shared < shared_end; shared++) {
+ struct rte_flow_shared_action *shared_action;
+
+ shared_action = shared->action;
+ switch (shared_action->type) {
+ case MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS:
+ __atomic_add_fetch(&shared_action->refcnt, 1,
+ __ATOMIC_RELAXED);
+ return &shared_action->rss;
+ default:
+ break;
+ }
+ }
+ return NULL;
+}
+
+struct rte_flow_shared_action *
+mlx5_flow_get_shared_rss(struct rte_flow *flow)
+{
+ if (flow->shared_rss)
+ return container_of(flow->shared_rss,
+ struct rte_flow_shared_action, rss);
+ else
+ return NULL;
+}
+
static unsigned int
find_graph_root(const struct rte_flow_item pattern[], uint32_t rss_level)
{
@@ -5032,13 +5215,16 @@ static uint32_t
flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
+ const struct rte_flow_action original_actions[],
bool external, struct rte_flow_error *error)
{
struct mlx5_priv *priv = dev->data->dev_private;
struct rte_flow *flow = NULL;
struct mlx5_flow *dev_flow;
const struct rte_flow_action_rss *rss;
+ struct mlx5_translated_shared_action
+ shared_actions[MLX5_MAX_SHARED_ACTIONS];
+ int shared_actions_n = MLX5_MAX_SHARED_ACTIONS;
union {
struct mlx5_flow_expand_rss buf;
uint8_t buffer[2048];
@@ -5058,27 +5244,38 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
struct mlx5_flow_expand_rss *buf = &expand_buffer.buf;
struct mlx5_flow_rss_desc *rss_desc = &((struct mlx5_flow_rss_desc *)
priv->rss_desc)[!!priv->flow_idx];
- const struct rte_flow_action *p_actions_rx = actions;
+ const struct rte_flow_action *p_actions_rx;
uint32_t i;
uint32_t idx = 0;
int hairpin_flow;
uint32_t hairpin_id = 0;
struct rte_flow_attr attr_tx = { .priority = 0 };
struct rte_flow_attr attr_factor = {0};
- int ret;
-
+ const struct rte_flow_action *actions;
+ struct rte_flow_action *translated_actions = NULL;
+ int ret = flow_shared_actions_translate(original_actions,
+ shared_actions,
+ &shared_actions_n,
+ &translated_actions, error);
+
+ if (ret < 0) {
+ MLX5_ASSERT(translated_actions == NULL);
+ return 0;
+ }
+ actions = translated_actions ? translated_actions : original_actions;
memcpy((void *)&attr_factor, (const void *)attr, sizeof(*attr));
if (external)
attr_factor.group *= MLX5_FLOW_TABLE_FACTOR;
+ p_actions_rx = actions;
hairpin_flow = flow_check_hairpin_split(dev, &attr_factor, actions);
ret = flow_drv_validate(dev, &attr_factor, items, p_actions_rx,
external, hairpin_flow, error);
if (ret < 0)
- return 0;
+ goto error_before_hairpin_split;
if (hairpin_flow > 0) {
if (hairpin_flow > MLX5_MAX_SPLIT_ACTIONS) {
rte_errno = EINVAL;
- return 0;
+ goto error_before_hairpin_split;
}
flow_hairpin_split(dev, actions, actions_rx.actions,
actions_hairpin_tx.actions, items_tx.items,
@@ -5120,6 +5317,8 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
buf->entries = 1;
buf->entry[0].pattern = (void *)(uintptr_t)items;
}
+ flow->shared_rss = flow_get_shared_rss_action(shared_actions,
+ shared_actions_n);
/*
* Record the start index when there is a nested call. All sub-flows
* need to be translated before another calling.
@@ -5191,6 +5390,7 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
ILIST_INSERT(priv->sh->ipool[MLX5_IPOOL_RTE_FLOW], list, idx,
flow, next);
flow_rxq_flags_set(dev, flow);
+ rte_free(translated_actions);
/* Nested flow creation index recovery. */
priv->flow_idx = priv->flow_nested_idx;
if (priv->flow_nested_idx)
@@ -5212,6 +5412,8 @@ flow_list_create(struct rte_eth_dev *dev, uint32_t *list,
priv->flow_idx = priv->flow_nested_idx;
if (priv->flow_nested_idx)
priv->flow_nested_idx = 0;
+error_before_hairpin_split:
+ rte_free(translated_actions);
return 0;
}
@@ -5275,14 +5477,28 @@ int
mlx5_flow_validate(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
- const struct rte_flow_action actions[],
+ const struct rte_flow_action original_actions[],
struct rte_flow_error *error)
{
int hairpin_flow;
+ struct mlx5_translated_shared_action
+ shared_actions[MLX5_MAX_SHARED_ACTIONS];
+ int shared_actions_n = MLX5_MAX_SHARED_ACTIONS;
+ const struct rte_flow_action *actions;
+ struct rte_flow_action *translated_actions = NULL;
+ int ret = flow_shared_actions_translate(original_actions,
+ shared_actions,
+ &shared_actions_n,
+ &translated_actions, error);
+ if (ret)
+ return ret;
+ actions = translated_actions ? translated_actions : original_actions;
hairpin_flow = flow_check_hairpin_split(dev, attr, actions);
- return flow_drv_validate(dev, attr, items, actions,
+ ret = flow_drv_validate(dev, attr, items, actions,
true, hairpin_flow, error);
+ rte_free(translated_actions);
+ return ret;
}
/**
@@ -7081,3 +7297,230 @@ mlx5_flow_get_aged_flows(struct rte_eth_dev *dev, void **contexts,
dev->data->port_id);
return -ENOTSUP;
}
+
+/* Wrapper for driver action_validate op callback */
+static int
+flow_drv_action_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ const struct mlx5_flow_driver_ops *fops,
+ struct rte_flow_error *error)
+{
+ static const char err_msg[] = "shared action validation unsupported";
+
+ if (!fops->action_validate) {
+ DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, err_msg);
+ return -rte_errno;
+ }
+ return fops->action_validate(dev, conf, action, error);
+}
+
+/**
+ * Destroys the shared action by handle.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] action
+ * Handle for the shared action to be destroyed.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ *
+ * @note: wrapper for driver action_destroy op callback.
+ */
+static int
+mlx5_shared_action_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error)
+{
+ static const char err_msg[] = "shared action destruction unsupported";
+ struct rte_flow_attr attr = { .transfer = 0 };
+ const struct mlx5_flow_driver_ops *fops =
+ flow_get_drv_ops(flow_get_drv_type(dev, &attr));
+
+ if (!fops->action_destroy) {
+ DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, err_msg);
+ return -rte_errno;
+ }
+ return fops->action_destroy(dev, action, error);
+}
+
+/* Wrapper for driver action_update op callback */
+static int
+flow_drv_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ const struct mlx5_flow_driver_ops *fops,
+ struct rte_flow_error *error)
+{
+ static const char err_msg[] = "shared action update unsupported";
+
+ if (!fops->action_update) {
+ DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, err_msg);
+ return -rte_errno;
+ }
+ return fops->action_update(dev, action, action_conf, error);
+}
+
+/**
+ * Create shared action for reuse in multiple flow rules.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] action
+ * Action configuration for shared action creation.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ * @return
+ * A valid handle in case of success, NULL otherwise and rte_errno is set.
+ */
+static struct rte_flow_shared_action *
+mlx5_shared_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ static const char err_msg[] = "shared action creation unsupported";
+ struct rte_flow_attr attr = { .transfer = 0 };
+ const struct mlx5_flow_driver_ops *fops =
+ flow_get_drv_ops(flow_get_drv_type(dev, &attr));
+
+ if (flow_drv_action_validate(dev, conf, action, fops, error))
+ return NULL;
+ if (!fops->action_create) {
+ DRV_LOG(ERR, "port %u %s.", dev->data->port_id, err_msg);
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, err_msg);
+ return NULL;
+ }
+ return fops->action_create(dev, conf, action, error);
+}
+
+/**
+ * Updates in place the shared action configuration pointed to by the
+ * *shared_action* handle with the configuration provided as the *action*
+ * argument.
+ * The update of the shared action configuration affects all flow rules
+ * reusing the action via the handle.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] shared_action
+ * Handle for the shared action to be updated.
+ * @param[in] action
+ * Action specification used to modify the action pointed by handle.
+ * *action* should be of the same type as the action pointed to by the
+ * *shared_action* handle argument, otherwise it is considered invalid.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_shared_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *shared_action,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_attr attr = { .transfer = 0 };
+ const struct mlx5_flow_driver_ops *fops =
+ flow_get_drv_ops(flow_get_drv_type(dev, &attr));
+ int ret;
+
+ switch (shared_action->type) {
+ case MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS:
+ if (action->type != RTE_FLOW_ACTION_TYPE_RSS) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "update action type invalid");
+ }
+ ret = flow_drv_action_validate(dev, NULL, action, fops, error);
+ if (ret)
+ return ret;
+ return flow_drv_action_update(dev, shared_action, action->conf,
+ fops, error);
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
+
+/**
+ * Query the shared action by handle.
+ *
+ * This function allows retrieving action-specific data such as counters.
+ * Data is gathered by special action which may be present/referenced in
+ * more than one flow rule definition.
+ *
+ * \see RTE_FLOW_ACTION_TYPE_COUNT
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ * @param[in] action
+ * Handle for the shared action to query.
+ * @param[in, out] data
+ * Pointer to storage for the associated query data type.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. PMDs initialize this
+ * structure in case of error only.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+static int
+mlx5_shared_action_query(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action *action,
+ void *data,
+ struct rte_flow_error *error)
+{
+ (void)dev;
+ switch (action->type) {
+ case MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS:
+ __atomic_load(&action->refcnt, (uint32_t *)data,
+ __ATOMIC_RELAXED);
+ return 0;
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
+
+/**
+ * Destroy all shared actions.
+ *
+ * @param dev
+ * Pointer to Ethernet device.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_shared_action_flush(struct rte_eth_dev *dev)
+{
+ struct rte_flow_error error;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct rte_flow_shared_action *action;
+ int ret = 0;
+
+ while (!LIST_EMPTY(&priv->shared_actions)) {
+ action = LIST_FIRST(&priv->shared_actions);
+ ret = mlx5_shared_action_destroy(dev, action, &error);
+ }
+ return ret;
+}
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index b4be4769ef..7faab43fe6 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -35,6 +35,7 @@ enum mlx5_rte_flow_action_type {
MLX5_RTE_FLOW_ACTION_TYPE_MARK,
MLX5_RTE_FLOW_ACTION_TYPE_COPY_MREG,
MLX5_RTE_FLOW_ACTION_TYPE_DEFAULT_MISS,
+ MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS,
};
/* Matches on selected register. */
@@ -916,6 +917,7 @@ struct mlx5_fdir_flow {
/* Flow structure. */
struct rte_flow {
ILIST_ENTRY(uint32_t)next; /**< Index to the next flow structure. */
+ struct mlx5_shared_action_rss *shared_rss; /**< Shared RSS action. */
uint32_t dev_handles;
/**< Device flow handles that are part of the flow. */
uint32_t drv_type:2; /**< Driver type. */
@@ -929,6 +931,62 @@ struct rte_flow {
uint16_t meter; /**< Holds flow meter id. */
} __rte_packed;
+/*
+ * Define list of valid combinations of RX Hash fields
+ * (see enum ibv_rx_hash_fields).
+ */
+#define MLX5_RSS_HASH_IPV4 (IBV_RX_HASH_SRC_IPV4 | IBV_RX_HASH_DST_IPV4)
+#define MLX5_RSS_HASH_IPV4_TCP \
+ (MLX5_RSS_HASH_IPV4 | \
+ IBV_RX_HASH_SRC_PORT_TCP | IBV_RX_HASH_DST_PORT_TCP)
+#define MLX5_RSS_HASH_IPV4_UDP \
+ (MLX5_RSS_HASH_IPV4 | \
+ IBV_RX_HASH_SRC_PORT_UDP | IBV_RX_HASH_DST_PORT_UDP)
+#define MLX5_RSS_HASH_IPV6 (IBV_RX_HASH_SRC_IPV6 | IBV_RX_HASH_DST_IPV6)
+#define MLX5_RSS_HASH_IPV6_TCP \
+ (MLX5_RSS_HASH_IPV6 | \
+ IBV_RX_HASH_SRC_PORT_TCP | IBV_RX_HASH_DST_PORT_TCP)
+#define MLX5_RSS_HASH_IPV6_UDP \
+ (MLX5_RSS_HASH_IPV6 | \
+ IBV_RX_HASH_SRC_PORT_UDP | IBV_RX_HASH_DST_PORT_UDP)
+#define MLX5_RSS_HASH_NONE 0ULL
+
+/* array of valid combinations of RX Hash fields for RSS */
+static const uint64_t mlx5_rss_hash_fields[] = {
+ MLX5_RSS_HASH_IPV4,
+ MLX5_RSS_HASH_IPV4_TCP,
+ MLX5_RSS_HASH_IPV4_UDP,
+ MLX5_RSS_HASH_IPV6,
+ MLX5_RSS_HASH_IPV6_TCP,
+ MLX5_RSS_HASH_IPV6_UDP,
+ MLX5_RSS_HASH_NONE,
+};
+
+#define MLX5_RSS_HASH_FIELDS_LEN RTE_DIM(mlx5_rss_hash_fields)
+
+/* Shared RSS action structure */
+struct mlx5_shared_action_rss {
+ struct rte_flow_action_rss origin; /**< Original rte RSS action. */
+ uint8_t key[MLX5_RSS_HASH_KEY_LEN]; /**< RSS hash key. */
+ uint16_t *queue; /**< Queue indices to use. */
+ uint32_t hrxq[MLX5_RSS_HASH_FIELDS_LEN];
+ /**< Hash RX queue indexes mapped to mlx5_rss_hash_fields. */
+ uint32_t hrxq_tunnel[MLX5_RSS_HASH_FIELDS_LEN];
+ /**< Hash RX queue indexes for tunneled RSS. */
+};
+
+struct rte_flow_shared_action {
+ LIST_ENTRY(rte_flow_shared_action) next;
+ /**< Pointer to the next element. */
+ uint32_t refcnt; /**< Atomically accessed refcnt. */
+ uint64_t type;
+ /**< Shared action type (see MLX5_FLOW_ACTION_SHARED_*). */
+ union {
+ struct mlx5_shared_action_rss rss;
+ /**< Shared RSS action. */
+ };
+};
+
typedef int (*mlx5_flow_validate_t)(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
const struct rte_flow_item items[],
@@ -983,6 +1041,25 @@ typedef int (*mlx5_flow_get_aged_flows_t)
void **context,
uint32_t nb_contexts,
struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_validate_t)
+ (struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+typedef struct rte_flow_shared_action *(*mlx5_flow_action_create_t)
+ (struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_destroy_t)
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error);
+typedef int (*mlx5_flow_action_update_t)
+ (struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ struct rte_flow_error *error);
struct mlx5_flow_driver_ops {
mlx5_flow_validate_t validate;
mlx5_flow_prepare_t prepare;
@@ -999,6 +1076,10 @@ struct mlx5_flow_driver_ops {
mlx5_flow_counter_free_t counter_free;
mlx5_flow_counter_query_t counter_query;
mlx5_flow_get_aged_flows_t get_aged_flows;
+ mlx5_flow_action_validate_t action_validate;
+ mlx5_flow_action_create_t action_create;
+ mlx5_flow_action_destroy_t action_destroy;
+ mlx5_flow_action_update_t action_update;
};
/* mlx5_flow.c */
@@ -1024,6 +1105,9 @@ int mlx5_flow_get_reg_id(struct rte_eth_dev *dev,
const struct rte_flow_action *mlx5_flow_find_action
(const struct rte_flow_action *actions,
enum rte_flow_action_type action);
+int mlx5_validate_action_rss(struct rte_eth_dev *dev,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error);
int mlx5_flow_validate_action_count(struct rte_eth_dev *dev,
const struct rte_flow_attr *attr,
struct rte_flow_error *error);
@@ -1145,4 +1229,6 @@ int mlx5_flow_destroy_policer_rules(struct rte_eth_dev *dev,
int mlx5_flow_meter_flush(struct rte_eth_dev *dev,
struct rte_mtr_error *error);
int mlx5_flow_dv_discover_counter_offset_support(struct rte_eth_dev *dev);
+struct rte_flow_shared_action *mlx5_flow_get_shared_rss(struct rte_flow *flow);
+int mlx5_shared_action_flush(struct rte_eth_dev *dev);
#endif /* RTE_PMD_MLX5_FLOW_H_ */
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
` (2 preceding siblings ...)
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 3/4] net/mlx5: shared action PMD Andrey Vesnovaty
@ 2020-10-23 10:24 ` Andrey Vesnovaty
2020-10-23 14:17 ` Slava Ovsiienko
2020-10-26 16:38 ` Ferruh Yigit
2020-10-25 12:43 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Raslan Darawsheh
4 siblings, 2 replies; 19+ messages in thread
From: Andrey Vesnovaty @ 2020-10-23 10:24 UTC (permalink / raw)
To: dev
Cc: jer, jerinjacobk, thomas, ferruh.yigit, stephen,
bruce.richardson, orika, viacheslavo, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
Implement shared action create/destroy/update/query. The current
implementation is limited to the shared RSS action only. The shared RSS
action create operation prepares hash RX queue objects for all supported
permutations of the hash fields. The shared RSS action update operation
relies on the functionality to modify hash RX queue objects introduced in
one of the previous commits in this patch series.
Implement the RSS shared action and handle shared RSS on flow apply and
release. When handling a shared RSS action, the lookup for the hash RX
queue object is limited to the set of objects stored in the shared action
itself, and that lookup is performed by hash fields only.
The current implementation is limited to DV flow driver operations, i.e.
the Verbs flow driver operations don't support shared actions.
Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
---
drivers/net/mlx5/mlx5_flow_dv.c | 690 ++++++++++++++++++++++++++++++--
1 file changed, 666 insertions(+), 24 deletions(-)
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 2bac7dac9b..66d81e9598 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -9943,6 +9943,158 @@ __flow_dv_translate(struct rte_eth_dev *dev,
return 0;
}
+/**
+ * Set hash RX queue by hash fields (see enum ibv_rx_hash_fields)
+ * and tunnel.
+ *
+ * @param[in, out] action
+ * Shared RSS action holding hash RX queue objects.
+ * @param[in] hash_fields
+ * Defines combination of packet fields to participate in RX hash.
+ * @param[in] tunnel
+ * Tunnel type.
+ * @param[in] hrxq_idx
+ * Hash RX queue index to set.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_rss_hrxq_set(struct mlx5_shared_action_rss *action,
+ const uint64_t hash_fields,
+ const int tunnel,
+ uint32_t hrxq_idx)
+{
+ uint32_t *hrxqs = tunnel ? action->hrxq : action->hrxq_tunnel;
+
+ switch (hash_fields & ~IBV_RX_HASH_INNER) {
+ case MLX5_RSS_HASH_IPV4:
+ hrxqs[0] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV4_TCP:
+ hrxqs[1] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV4_UDP:
+ hrxqs[2] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV6:
+ hrxqs[3] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV6_TCP:
+ hrxqs[4] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_IPV6_UDP:
+ hrxqs[5] = hrxq_idx;
+ return 0;
+ case MLX5_RSS_HASH_NONE:
+ hrxqs[6] = hrxq_idx;
+ return 0;
+ default:
+ return -1;
+ }
+}
+
+/**
+ * Look up for hash RX queue by hash fields (see enum ibv_rx_hash_fields)
+ * and tunnel.
+ *
+ * @param[in] action
+ * Shared RSS action holding hash RX queue objects.
+ * @param[in] hash_fields
+ * Defines combination of packet fields to participate in RX hash.
+ * @param[in] tunnel
+ * Tunnel type.
+ *
+ * @return
+ * Valid hash RX queue index, otherwise 0.
+ */
+static uint32_t
+__flow_dv_action_rss_hrxq_lookup(const struct mlx5_shared_action_rss *action,
+ const uint64_t hash_fields,
+ const int tunnel)
+{
+ const uint32_t *hrxqs = tunnel ? action->hrxq : action->hrxq_tunnel;
+
+ switch (hash_fields & ~IBV_RX_HASH_INNER) {
+ case MLX5_RSS_HASH_IPV4:
+ return hrxqs[0];
+ case MLX5_RSS_HASH_IPV4_TCP:
+ return hrxqs[1];
+ case MLX5_RSS_HASH_IPV4_UDP:
+ return hrxqs[2];
+ case MLX5_RSS_HASH_IPV6:
+ return hrxqs[3];
+ case MLX5_RSS_HASH_IPV6_TCP:
+ return hrxqs[4];
+ case MLX5_RSS_HASH_IPV6_UDP:
+ return hrxqs[5];
+ case MLX5_RSS_HASH_NONE:
+ return hrxqs[6];
+ default:
+ return 0;
+ }
+}
+
+/**
+ * Retrieve the hash RX queue suitable for the *flow*.
+ * If a shared action is configured for the *flow*, the suitable hash RX
+ * queue is retrieved from the attached shared action.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] flow
+ * Pointer to the flow.
+ * @param[in] dev_flow
+ * Pointer to the sub flow.
+ * @param[out] hrxq
+ * Pointer to retrieved hash RX queue object.
+ *
+ * @return
+ * Valid hash RX queue index, otherwise 0 and rte_errno is set.
+ */
+static uint32_t
+__flow_dv_rss_get_hrxq(struct rte_eth_dev *dev, struct rte_flow *flow,
+ struct mlx5_flow *dev_flow,
+ struct mlx5_hrxq **hrxq)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ uint32_t hrxq_idx;
+
+ if (flow->shared_rss) {
+ hrxq_idx = __flow_dv_action_rss_hrxq_lookup
+ (flow->shared_rss, dev_flow->hash_fields,
+ !!(dev_flow->handle->layers &
+ MLX5_FLOW_LAYER_TUNNEL));
+ if (hrxq_idx) {
+ *hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
+ hrxq_idx);
+ rte_atomic32_inc(&(*hrxq)->refcnt);
+ }
+ } else {
+ struct mlx5_flow_rss_desc *rss_desc =
+ &((struct mlx5_flow_rss_desc *)priv->rss_desc)
+ [!!priv->flow_nested_idx];
+
+ MLX5_ASSERT(rss_desc->queue_num);
+ hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ rss_desc->queue, rss_desc->queue_num);
+ if (!hrxq_idx) {
+ hrxq_idx = mlx5_hrxq_new(dev,
+ rss_desc->key,
+ MLX5_RSS_HASH_KEY_LEN,
+ dev_flow->hash_fields,
+ rss_desc->queue,
+ rss_desc->queue_num,
+ !!(dev_flow->handle->layers &
+ MLX5_FLOW_LAYER_TUNNEL),
+ false);
+ }
+ *hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
+ hrxq_idx);
+ }
+ return hrxq_idx;
+}
+
/**
* Apply the flow to the NIC, lock free,
* (mutex should be acquired by caller).
@@ -10002,31 +10154,10 @@ __flow_dv_apply(struct rte_eth_dev *dev, struct rte_flow *flow,
}
} else if (dh->fate_action == MLX5_FLOW_FATE_QUEUE &&
!dv_h->rix_sample && !dv_h->rix_dest_array) {
- struct mlx5_hrxq *hrxq;
- uint32_t hrxq_idx;
- struct mlx5_flow_rss_desc *rss_desc =
- &((struct mlx5_flow_rss_desc *)priv->rss_desc)
- [!!priv->flow_nested_idx];
+ struct mlx5_hrxq *hrxq = NULL;
+ uint32_t hrxq_idx = __flow_dv_rss_get_hrxq
+ (dev, flow, dev_flow, &hrxq);
- MLX5_ASSERT(rss_desc->queue_num);
- hrxq_idx = mlx5_hrxq_get(dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
- dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num);
- if (!hrxq_idx) {
- hrxq_idx = mlx5_hrxq_new
- (dev, rss_desc->key,
- MLX5_RSS_HASH_KEY_LEN,
- dev_flow->hash_fields,
- rss_desc->queue,
- rss_desc->queue_num,
- !!(dh->layers &
- MLX5_FLOW_LAYER_TUNNEL),
- false);
- }
- hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
- hrxq_idx);
if (!hrxq) {
rte_flow_error_set
(error, rte_errno,
@@ -10579,12 +10710,16 @@ __flow_dv_remove(struct rte_eth_dev *dev, struct rte_flow *flow)
static void
__flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
{
+ struct rte_flow_shared_action *shared;
struct mlx5_flow_handle *dev_handle;
struct mlx5_priv *priv = dev->data->dev_private;
if (!flow)
return;
__flow_dv_remove(dev, flow);
+ shared = mlx5_flow_get_shared_rss(flow);
+ if (shared)
+ __atomic_sub_fetch(&shared->refcnt, 1, __ATOMIC_RELAXED);
if (flow->counter) {
flow_dv_counter_release(dev, flow->counter);
flow->counter = 0;
@@ -10629,6 +10764,423 @@ __flow_dv_destroy(struct rte_eth_dev *dev, struct rte_flow *flow)
}
}
+/**
+ * Release array of hash RX queue objects.
+ * Helper function.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in, out] hrxqs
+ * Array of hash RX queue objects.
+ *
+ * @return
+ * Total number of references to hash RX queue objects in *hrxqs* array
+ * after this operation.
+ */
+static int
+__flow_dv_hrxqs_release(struct rte_eth_dev *dev,
+ uint32_t (*hrxqs)[MLX5_RSS_HASH_FIELDS_LEN])
+{
+ size_t i;
+ int remaining = 0;
+
+ for (i = 0; i < RTE_DIM(*hrxqs); i++) {
+ int ret = mlx5_hrxq_release(dev, (*hrxqs)[i]);
+
+ if (!ret)
+ (*hrxqs)[i] = 0;
+ remaining += ret;
+ }
+ return remaining;
+}
+
+/**
+ * Release all hash RX queue objects representing shared RSS action.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in, out] action
+ * Shared RSS action to remove hash RX queue objects from.
+ *
+ * @return
+ * Total number of references to hash RX queue objects stored in *action*
+ * after this operation.
+ * Expected to be 0 if no external references held.
+ */
+static int
+__flow_dv_action_rss_hrxqs_release(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *action)
+{
+ return __flow_dv_hrxqs_release(dev, &action->hrxq) +
+ __flow_dv_hrxqs_release(dev, &action->hrxq_tunnel);
+}
+
+/**
+ * Setup shared RSS action.
+ * Prepare set of hash RX queue objects sufficient to handle all valid
+ * hash_fields combinations (see enum ibv_rx_hash_fields).
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in, out] action
+ * Partially initialized shared RSS action.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_rss_setup(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *action,
+ struct rte_flow_error *error)
+{
+ size_t i;
+ int err;
+
+ for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) {
+ uint32_t hrxq_idx;
+ uint64_t hash_fields = mlx5_rss_hash_fields[i];
+ int tunnel;
+
+ for (tunnel = 0; tunnel < 2; tunnel++) {
+ hrxq_idx = mlx5_hrxq_new(dev, action->origin.key,
+ MLX5_RSS_HASH_KEY_LEN,
+ hash_fields,
+ action->origin.queue,
+ action->origin.queue_num,
+ tunnel, true);
+ if (!hrxq_idx) {
+ rte_flow_error_set
+ (error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "cannot get hash queue");
+ goto error_hrxq_new;
+ }
+ err = __flow_dv_action_rss_hrxq_set
+ (action, hash_fields, tunnel, hrxq_idx);
+ MLX5_ASSERT(!err);
+ }
+ }
+ return 0;
+error_hrxq_new:
+ err = rte_errno;
+ __flow_dv_action_rss_hrxqs_release(dev, action);
+ rte_errno = err;
+ return -rte_errno;
+}
+
+/**
+ * Create shared RSS action.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] conf
+ * Shared action configuration.
+ * @param[in] rss
+ * RSS action specification used to create shared action.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * A valid shared action handle in case of success, NULL otherwise and
+ * rte_errno is set.
+ */
+static struct rte_flow_shared_action *
+__flow_dv_action_rss_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action_rss *rss,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+ void *queue = NULL;
+ struct mlx5_shared_action_rss *shared_rss;
+ struct rte_flow_action_rss *origin;
+ const uint8_t *rss_key;
+ uint32_t queue_size = rss->queue_num * sizeof(uint16_t);
+
+ RTE_SET_USED(conf);
+ queue = mlx5_malloc(0, RTE_ALIGN_CEIL(queue_size, sizeof(void *)),
+ 0, SOCKET_ID_ANY);
+ shared_action = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*shared_action), 0,
+ SOCKET_ID_ANY);
+ if (!shared_action || !queue) {
+ rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL,
+ "cannot allocate resource memory");
+ goto error_rss_init;
+ }
+ shared_rss = &shared_action->rss;
+ shared_rss->queue = queue;
+ origin = &shared_rss->origin;
+ origin->func = rss->func;
+ origin->level = rss->level;
+ /* RSS type 0 indicates default RSS type (ETH_RSS_IP). */
+ origin->types = !rss->types ? ETH_RSS_IP : rss->types;
+ /* NULL RSS key indicates default RSS key. */
+ rss_key = !rss->key ? rss_hash_default_key : rss->key;
+ memcpy(shared_rss->key, rss_key, MLX5_RSS_HASH_KEY_LEN);
+ origin->key = &shared_rss->key[0];
+ origin->key_len = MLX5_RSS_HASH_KEY_LEN;
+ memcpy(shared_rss->queue, rss->queue, queue_size);
+ origin->queue = shared_rss->queue;
+ origin->queue_num = rss->queue_num;
+ if (__flow_dv_action_rss_setup(dev, shared_rss, error))
+ goto error_rss_init;
+ shared_action->type = MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS;
+ return shared_action;
+error_rss_init:
+ mlx5_free(shared_action);
+ mlx5_free(queue);
+ return NULL;
+}
+
+/**
+ * Destroy the shared RSS action.
+ * Release related hash RX queue objects.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] shared_rss
+ * The shared RSS action object to be removed.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_rss_release(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *shared_rss,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+ uint32_t old_refcnt = 1;
+ int remaining = __flow_dv_action_rss_hrxqs_release(dev, shared_rss);
+
+ if (remaining) {
+ return rte_flow_error_set(error, ETOOMANYREFS,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "shared rss hrxq has references");
+ }
+ shared_action = container_of(shared_rss,
+ struct rte_flow_shared_action, rss);
+ if (!__atomic_compare_exchange_n(&shared_action->refcnt, &old_refcnt,
+ 0, 0,
+ __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
+ return rte_flow_error_set(error, ETOOMANYREFS,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "shared rss has references");
+ }
+ mlx5_free(shared_rss->queue);
+ return 0;
+}
+
+/**
+ * Create shared action, lock free,
+ * (mutex should be acquired by caller).
+ * Dispatcher for action type specific call.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] conf
+ * Shared action configuration.
+ * @param[in] action
+ * Action specification used to create shared action.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * A valid shared action handle in case of success, NULL otherwise and
+ * rte_errno is set.
+ */
+static struct rte_flow_shared_action *
+__flow_dv_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ switch (action->type) {
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ shared_action = __flow_dv_action_rss_create(dev, conf,
+ action->conf,
+ error);
+ break;
+ default:
+ rte_flow_error_set(error, ENOTSUP, RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL, "action type not supported");
+ break;
+ }
+ if (shared_action) {
+ __atomic_add_fetch(&shared_action->refcnt, 1,
+ __ATOMIC_RELAXED);
+ LIST_INSERT_HEAD(&priv->shared_actions, shared_action, next);
+ }
+ return shared_action;
+}
+
+/**
+ * Destroy the shared action.
+ * Release action related resources on the NIC and the memory.
+ * Lock free, (mutex should be acquired by caller).
+ * Dispatcher for action type specific call.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] action
+ * The shared action object to be removed.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error)
+{
+ int ret;
+
+ switch (action->type) {
+ case MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS:
+ ret = __flow_dv_action_rss_release(dev, &action->rss, error);
+ break;
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+ if (ret)
+ return ret;
+ LIST_REMOVE(action, next);
+ mlx5_free(action);
+ return 0;
+}
+
+/**
+ * Updates in place shared RSS action configuration.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] shared_rss
+ * The shared RSS action object to be updated.
+ * @param[in] action_conf
+ * RSS action specification used to modify *shared_rss*.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ * @note Currently only the update of RSS queues is supported.
+ */
+static int
+__flow_dv_action_rss_update(struct rte_eth_dev *dev,
+ struct mlx5_shared_action_rss *shared_rss,
+ const struct rte_flow_action_rss *action_conf,
+ struct rte_flow_error *error)
+{
+ size_t i;
+ int ret;
+ void *queue = NULL;
+ const uint8_t *rss_key;
+ uint32_t rss_key_len;
+ uint32_t queue_size = action_conf->queue_num * sizeof(uint16_t);
+
+ queue = mlx5_malloc(MLX5_MEM_ZERO,
+ RTE_ALIGN_CEIL(queue_size, sizeof(void *)),
+ 0, SOCKET_ID_ANY);
+ if (!queue)
+ return rte_flow_error_set(error, ENOMEM,
+ RTE_FLOW_ERROR_TYPE_UNSPECIFIED,
+ NULL,
+ "cannot allocate resource memory");
+ if (action_conf->key) {
+ rss_key = action_conf->key;
+ rss_key_len = action_conf->key_len;
+ } else {
+ rss_key = rss_hash_default_key;
+ rss_key_len = MLX5_RSS_HASH_KEY_LEN;
+ }
+ for (i = 0; i < MLX5_RSS_HASH_FIELDS_LEN; i++) {
+ uint32_t hrxq_idx;
+ uint64_t hash_fields = mlx5_rss_hash_fields[i];
+ int tunnel;
+
+ for (tunnel = 0; tunnel < 2; tunnel++) {
+ hrxq_idx = __flow_dv_action_rss_hrxq_lookup
+ (shared_rss, hash_fields, tunnel);
+ MLX5_ASSERT(hrxq_idx);
+ ret = mlx5_hrxq_modify
+ (dev, hrxq_idx,
+ rss_key, rss_key_len,
+ hash_fields,
+ action_conf->queue, action_conf->queue_num);
+ if (ret) {
+ mlx5_free(queue);
+ return rte_flow_error_set
+ (error, rte_errno,
+ RTE_FLOW_ERROR_TYPE_ACTION, NULL,
+ "cannot update hash queue");
+ }
+ }
+ }
+ mlx5_free(shared_rss->queue);
+ shared_rss->queue = queue;
+ memcpy(shared_rss->queue, action_conf->queue, queue_size);
+ shared_rss->origin.queue = shared_rss->queue;
+ shared_rss->origin.queue_num = action_conf->queue_num;
+ return 0;
+}
+
+/**
+ * Updates in place shared action configuration, lock free,
+ * (mutex should be acquired by caller).
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] action
+ * The shared action object to be updated.
+ * @param[in] action_conf
+ * Action specification used to modify *action*.
+ * *action_conf* should be of type correlating with type of the *action*,
+ * otherwise considered as invalid.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+__flow_dv_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ struct rte_flow_error *error)
+{
+ switch (action->type) {
+ case MLX5_RTE_FLOW_ACTION_TYPE_SHARED_RSS:
+ return __flow_dv_action_rss_update(dev, &action->rss,
+ action_conf, error);
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
/**
* Query a dv flow rule for its statistics via devx.
*
@@ -11453,6 +12005,92 @@ flow_dv_counter_free(struct rte_eth_dev *dev, uint32_t cnt)
flow_dv_shared_unlock(dev);
}
+/**
+ * Validate shared action.
+ * Dispatcher for action type specific validation.
+ *
+ * @param[in] dev
+ * Pointer to the Ethernet device structure.
+ * @param[in] conf
+ * Shared action configuration.
+ * @param[in] action
+ * The shared action object to validate.
+ * @param[out] error
+ * Perform verbose error reporting if not NULL. Initialized in case of
+ * error only.
+ *
+ * @return
+ * 0 on success, otherwise negative errno value.
+ */
+static int
+flow_dv_action_validate(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ RTE_SET_USED(conf);
+ switch (action->type) {
+ case RTE_FLOW_ACTION_TYPE_RSS:
+ return mlx5_validate_action_rss(dev, action, error);
+ default:
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ACTION,
+ NULL,
+ "action type not supported");
+ }
+}
+
+/*
+ * Mutex-protected thunk to lock-free __flow_dv_action_create().
+ */
+static struct rte_flow_shared_action *
+flow_dv_action_create(struct rte_eth_dev *dev,
+ const struct rte_flow_shared_action_conf *conf,
+ const struct rte_flow_action *action,
+ struct rte_flow_error *error)
+{
+ struct rte_flow_shared_action *shared_action = NULL;
+
+ flow_dv_shared_lock(dev);
+ shared_action = __flow_dv_action_create(dev, conf, action, error);
+ flow_dv_shared_unlock(dev);
+ return shared_action;
+}
+
+/*
+ * Mutex-protected thunk to lock-free __flow_dv_action_destroy().
+ */
+static int
+flow_dv_action_destroy(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ struct rte_flow_error *error)
+{
+ int ret;
+
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_action_destroy(dev, action, error);
+ flow_dv_shared_unlock(dev);
+ return ret;
+}
+
+/*
+ * Mutex-protected thunk to lock-free __flow_dv_action_update().
+ */
+static int
+flow_dv_action_update(struct rte_eth_dev *dev,
+ struct rte_flow_shared_action *action,
+ const void *action_conf,
+ struct rte_flow_error *error)
+{
+ int ret;
+
+ flow_dv_shared_lock(dev);
+ ret = __flow_dv_action_update(dev, action, action_conf,
+ error);
+ flow_dv_shared_unlock(dev);
+ return ret;
+}
+
const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
.validate = flow_dv_validate,
.prepare = flow_dv_prepare,
@@ -11469,6 +12107,10 @@ const struct mlx5_flow_driver_ops mlx5_flow_dv_drv_ops = {
.counter_free = flow_dv_counter_free,
.counter_query = flow_dv_counter_query,
.get_aged_flows = flow_get_aged_flows,
+ .action_validate = flow_dv_action_validate,
+ .action_create = flow_dv_action_create,
+ .action_destroy = flow_dv_action_destroy,
+ .action_update = flow_dv_action_update,
};
#endif /* HAVE_IBV_FLOW_DV_SUPPORT */
--
2.26.2
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
@ 2020-10-23 14:16 ` Slava Ovsiienko
0 siblings, 0 replies; 19+ messages in thread
From: Slava Ovsiienko @ 2020-10-23 14:16 UTC (permalink / raw)
To: Andrey Vesnovaty, dev
Cc: jer, jerinjacobk, NBU-Contact-Thomas Monjalon, ferruh.yigit,
stephen, bruce.richardson, Ori Kam, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
> -----Original Message-----
> From: Andrey Vesnovaty <andreyv@nvidia.com>
> Sent: Friday, October 23, 2020 13:24
> To: dev@dpdk.org
> Cc: jer@marvell.com; jerinjacobk@gmail.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> stephen@networkplumber.org; bruce.richardson@intel.com; Ori Kam
> <orika@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> andrey.vesnovaty@gmail.com; mdr@ashroe.eu; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; samik.gupta@broadcom.com; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
> Subject: [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX
>
> Implement TIR modification (see mlx5_devx_cmd_modify_tir()) using DevX
> API. TIR is the object containing the hashed table of Rx queue. The
> functionality to configure/modify this HW-related object is prerequisite to
> implement rete_flow_shared_action_update() for shared RSS action in
> mlx5 PMD. HW-related structures for TIR modification add in mlx5_prm.h.
>
> Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 2/4] net/mlx5: modify hash Rx queue objects
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 2/4] net/mlx5: modify hash Rx queue objects Andrey Vesnovaty
@ 2020-10-23 14:17 ` Slava Ovsiienko
0 siblings, 0 replies; 19+ messages in thread
From: Slava Ovsiienko @ 2020-10-23 14:17 UTC (permalink / raw)
To: Andrey Vesnovaty, dev
Cc: jer, jerinjacobk, NBU-Contact-Thomas Monjalon, ferruh.yigit,
stephen, bruce.richardson, Ori Kam, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
> -----Original Message-----
> From: Andrey Vesnovaty <andreyv@nvidia.com>
> Sent: Friday, October 23, 2020 13:24
> To: dev@dpdk.org
> Cc: jer@marvell.com; jerinjacobk@gmail.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> stephen@networkplumber.org; bruce.richardson@intel.com; Ori Kam
> <orika@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> andrey.vesnovaty@gmail.com; mdr@ashroe.eu; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; samik.gupta@broadcom.com; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
> Subject: [PATCH v2 2/4] net/mlx5: modify hash Rx queue objects
>
> Implement modification for hashed table of Rx queue object (see
> mlx5_hrxq_modify()). This implementation relies on the capability to modify
> TIR object via DevX API, i.e. current implementation doesn't support verbs HW
> object operations. The functionality to modify hashed table of Rx queue object
> is prerequisite to implement
> rte_flow_shared_action_update() for shared RSS action in mlx5 PMD.
>
> Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
@ 2020-10-23 14:17 ` Slava Ovsiienko
2020-10-26 16:38 ` Ferruh Yigit
1 sibling, 0 replies; 19+ messages in thread
From: Slava Ovsiienko @ 2020-10-23 14:17 UTC (permalink / raw)
To: Andrey Vesnovaty, dev
Cc: jer, jerinjacobk, NBU-Contact-Thomas Monjalon, ferruh.yigit,
stephen, bruce.richardson, Ori Kam, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
> -----Original Message-----
> From: Andrey Vesnovaty <andreyv@nvidia.com>
> Sent: Friday, October 23, 2020 13:24
> To: dev@dpdk.org
> Cc: jer@marvell.com; jerinjacobk@gmail.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> stephen@networkplumber.org; bruce.richardson@intel.com; Ori Kam
> <orika@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> andrey.vesnovaty@gmail.com; mdr@ashroe.eu; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; samik.gupta@broadcom.com; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
> Subject: [PATCH v2 4/4] net/mlx5: driver support for shared action
>
> Implement shared action create/destroy/update/query. The current
> implementation support is limited to shared RSS action only. The shared RSS
> action create operation prepares hash RX queue objects for all supported
> permutations of the hash. The shared RSS action update operation relies on
> functionality to modify hash RX queue introduced in one of the previous
> commits in this patch series.
>
> Implement RSS shared action and handle shared RSS on flow apply and release.
> The lookup for hash RX queue object for RSS action is limited to the set of
> objects stored in the shared action itself and when handling shared RSS action.
> The lookup for hash RX queue object inside shared action is performed by hash
> only.
>
> Current implementation limited to DV flow driver operations i.e. verbs flow
> driver operations doesn't support shared action.
>
> Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 3/4] net/mlx5: shared action PMD
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 3/4] net/mlx5: shared action PMD Andrey Vesnovaty
@ 2020-10-23 14:17 ` Slava Ovsiienko
0 siblings, 0 replies; 19+ messages in thread
From: Slava Ovsiienko @ 2020-10-23 14:17 UTC (permalink / raw)
To: Andrey Vesnovaty, dev
Cc: jer, jerinjacobk, NBU-Contact-Thomas Monjalon, ferruh.yigit,
stephen, bruce.richardson, Ori Kam, andrey.vesnovaty, mdr,
nhorman, ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
> -----Original Message-----
> From: Andrey Vesnovaty <andreyv@nvidia.com>
> Sent: Friday, October 23, 2020 13:24
> To: dev@dpdk.org
> Cc: jer@marvell.com; jerinjacobk@gmail.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; ferruh.yigit@intel.com;
> stephen@networkplumber.org; bruce.richardson@intel.com; Ori Kam
> <orika@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> andrey.vesnovaty@gmail.com; mdr@ashroe.eu; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; samik.gupta@broadcom.com; Matan Azrad
> <matan@nvidia.com>; Shahaf Shuler <shahafs@nvidia.com>
> Subject: [PATCH v2 3/4] net/mlx5: shared action PMD
>
> Handle shared action on flow validation/creation/destruction.
> mlx5 PMD translates shared action into a regular one before handling flow
> validation/creation. The shared action translation applied to utilize the same
> execution path for both shared and regular actions.
> The current implementation supports shared action translation for shared RSS
> action only.
>
> RSS action validation split to validate shared RSS action on its creation in
> addition to action validation in flow validation/creation path.
>
> Implement rte_flow shared action API for mlx5 PMD, mostly forwarding calls to
> flow driver operations (see struct mlx5_flow_driver_ops).
>
> Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
` (3 preceding siblings ...)
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
@ 2020-10-25 12:43 ` Raslan Darawsheh
4 siblings, 0 replies; 19+ messages in thread
From: Raslan Darawsheh @ 2020-10-25 12:43 UTC (permalink / raw)
To: Andrey Vesnovaty, dev
Cc: jer, jerinjacobk, NBU-Contact-Thomas Monjalon, ferruh.yigit,
stephen, bruce.richardson, Ori Kam, Slava Ovsiienko,
andrey.vesnovaty, mdr, nhorman, ajit.khaparde, samik.gupta
Hi,
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Andrey Vesnovaty
> Sent: Friday, October 23, 2020 1:24 PM
> To: dev@dpdk.org
> Cc: jer@marvell.com; jerinjacobk@gmail.com; NBU-Contact-Thomas
> Monjalon <thomas@monjalon.net>; ferruh.yigit@intel.com;
> stephen@networkplumber.org; bruce.richardson@intel.com; Ori Kam
> <orika@nvidia.com>; Slava Ovsiienko <viacheslavo@nvidia.com>;
> andrey.vesnovaty@gmail.com; mdr@ashroe.eu; nhorman@tuxdriver.com;
> ajit.khaparde@broadcom.com; samik.gupta@broadcom.com
> Subject: [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl
>
> This patchset introduces Mellanox PMD implementation for shared RSS
> action. It was part of the 'RTE flow shared action API' patchset [1].
> After v3 the patchset was split into the RTE flow layer [2] and the PMD
> implementation (this patchset).
>
> PMD implementation of this patchset is based on RTE flow API [3].
>
> v2 changes (v1 was a draft):
> * lots of cosmetic changes
> * fix spelling/rephrases in comments and commit messages
> * fix code styling issues
> * code cleanups
> * bugfix: prevent non shared action modification
>
> [1] RTE flow shared action API v1
> http://inbox.dpdk.org/dev/20200702120511.16315-1-andreyv@mellanox.com/
> [2] RTE flow shared action API v4
> http://inbox.dpdk.org/dev/20201006200835.30017-1-andreyv@nvidia.com/
> [3] RTE flow shared action API v8
> http://inbox.dpdk.org/dev/20201014114015.17197-1-andreyv@nvidia.com/
>
> Andrey Vesnovaty (4):
> common/mlx5: modify advanced Rx object via DevX
> net/mlx5: modify hash Rx queue objects
> net/mlx5: shared action PMD
> net/mlx5: driver support for shared action
>
> drivers/common/mlx5/mlx5_devx_cmds.c | 84 ++++
> drivers/common/mlx5/mlx5_devx_cmds.h | 10 +
> drivers/common/mlx5/mlx5_prm.h | 29 ++
> drivers/common/mlx5/version.map | 1 +
> drivers/net/mlx5/mlx5.c | 1 +
> drivers/net/mlx5/mlx5.h | 7 +
> drivers/net/mlx5/mlx5_defs.h | 3 +
> drivers/net/mlx5/mlx5_devx.c | 151 ++++--
> drivers/net/mlx5/mlx5_flow.c | 499 +++++++++++++++++--
> drivers/net/mlx5/mlx5_flow.h | 86 ++++
> drivers/net/mlx5/mlx5_flow_dv.c | 705 +++++++++++++++++++++++++--
> drivers/net/mlx5/mlx5_flow_verbs.c | 3 +-
> drivers/net/mlx5/mlx5_rxq.c | 110 ++++-
> drivers/net/mlx5/mlx5_rxtx.h | 7 +-
> 14 files changed, 1596 insertions(+), 100 deletions(-)
>
> --
> 2.26.2
Series applied to next-net-mlx,
With a small comment: you are still using rte_atomic operations, but since we have a commitment to change this for all mlx PMDs, this one needs to be taken into consideration as well.
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
2020-10-23 14:17 ` Slava Ovsiienko
@ 2020-10-26 16:38 ` Ferruh Yigit
2020-10-26 16:40 ` Thomas Monjalon
2020-10-26 16:40 ` Slava Ovsiienko
1 sibling, 2 replies; 19+ messages in thread
From: Ferruh Yigit @ 2020-10-26 16:38 UTC (permalink / raw)
To: Andrey Vesnovaty, dev
Cc: jer, jerinjacobk, thomas, stephen, bruce.richardson, orika,
viacheslavo, andrey.vesnovaty, mdr, nhorman, ajit.khaparde,
samik.gupta, Matan Azrad, Shahaf Shuler
On 10/23/2020 11:24 AM, Andrey Vesnovaty wrote:
> Implement shared action create/destroy/update/query. The current
> implementation support is limited to shared RSS action only. The shared
> RSS action create operation prepares hash RX queue objects for all
> supported permutations of the hash. The shared RSS action update
> operation relies on functionality to modify hash RX queue introduced in
> one of the previous commits in this patch series.
>
> Implement RSS shared action and handle shared RSS on flow apply and
> release. The lookup for hash RX queue object for RSS action is limited
> to the set of objects stored in the shared action itself and when
> handling shared RSS action. The lookup for hash RX queue object inside
> shared action is performed by hash only.
>
> The current implementation is limited to DV flow driver operations, i.e.
> verbs flow driver operations don't support shared action.
>
> Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
<...>
> +static uint32_t
> +__flow_dv_rss_get_hrxq(struct rte_eth_dev *dev, struct rte_flow *flow,
> + struct mlx5_flow *dev_flow,
> + struct mlx5_hrxq **hrxq)
> +{
> + struct mlx5_priv *priv = dev->data->dev_private;
> + uint32_t hrxq_idx;
> +
> + if (flow->shared_rss) {
> + hrxq_idx = __flow_dv_action_rss_hrxq_lookup
> + (flow->shared_rss, dev_flow->hash_fields,
> + !!(dev_flow->handle->layers &
> + MLX5_FLOW_LAYER_TUNNEL));
> + if (hrxq_idx) {
> + *hrxq = mlx5_ipool_get(priv->sh->ipool[MLX5_IPOOL_HRXQ],
> + hrxq_idx);
> + rte_atomic32_inc(&(*hrxq)->refcnt);
I remember that adding more 'rte_atomicNN_xxx' calls to drivers has been
discussed before, and it was mentioned that a separate commit will be done to
replace all instances. I would like to remind it: is that work planned for -rc2?
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action
2020-10-26 16:38 ` Ferruh Yigit
@ 2020-10-26 16:40 ` Thomas Monjalon
2020-10-26 22:33 ` Asaf Penso
2020-10-26 16:40 ` Slava Ovsiienko
1 sibling, 1 reply; 19+ messages in thread
From: Thomas Monjalon @ 2020-10-26 16:40 UTC (permalink / raw)
To: Andrey Vesnovaty, Ferruh Yigit, viacheslavo, Matan Azrad, asafp
Cc: dev, jerinjacobk, stephen, bruce.richardson, orika,
andrey.vesnovaty, mdr, ajit.khaparde, samik.gupta, Shahaf Shuler
26/10/2020 17:38, Ferruh Yigit:
> On 10/23/2020 11:24 AM, Andrey Vesnovaty wrote:
> > + rte_atomic32_inc(&(*hrxq)->refcnt);
>
> I remember that adding more 'rte_atomicNN_xxx' calls to drivers has been
> discussed before, and it was mentioned that a separate commit will be done to
> replace all instances. I would like to remind it: is that work planned for -rc2?
Adding Asaf to reply about the timeline.
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action
2020-10-26 16:38 ` Ferruh Yigit
2020-10-26 16:40 ` Thomas Monjalon
@ 2020-10-26 16:40 ` Slava Ovsiienko
1 sibling, 0 replies; 19+ messages in thread
From: Slava Ovsiienko @ 2020-10-26 16:40 UTC (permalink / raw)
To: Ferruh Yigit, Andrey Vesnovaty, dev
Cc: jer, jerinjacobk, NBU-Contact-Thomas Monjalon, stephen,
bruce.richardson, Ori Kam, andrey.vesnovaty, mdr, nhorman,
ajit.khaparde, samik.gupta, Matan Azrad, Shahaf Shuler
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, October 26, 2020 18:38
> To: Andrey Vesnovaty <andreyv@nvidia.com>; dev@dpdk.org
> Cc: jer@marvell.com; jerinjacobk@gmail.com; NBU-Contact-Thomas Monjalon
> <thomas@monjalon.net>; stephen@networkplumber.org;
> bruce.richardson@intel.com; Ori Kam <orika@nvidia.com>; Slava Ovsiienko
> <viacheslavo@nvidia.com>; andrey.vesnovaty@gmail.com; mdr@ashroe.eu;
> nhorman@tuxdriver.com; ajit.khaparde@broadcom.com;
> samik.gupta@broadcom.com; Matan Azrad <matan@nvidia.com>; Shahaf
> Shuler <shahafs@nvidia.com>
> Subject: Re: [PATCH v2 4/4] net/mlx5: driver support for shared action
>
> On 10/23/2020 11:24 AM, Andrey Vesnovaty wrote:
> > Implement shared action create/destroy/update/query. The current
> > implementation support is limited to shared RSS action only. The
> > shared RSS action create operation prepares hash RX queue objects for
> > all supported permutations of the hash. The shared RSS action update
> > operation relies on functionality to modify hash RX queue introduced
> > in one of the previous commits in this patch series.
> >
> > Implement RSS shared action and handle shared RSS on flow apply and
> > release. The lookup for hash RX queue object for RSS action is limited
> > to the set of objects stored in the shared action itself and when
> > handling shared RSS action. The lookup for hash RX queue object inside
> > shared action is performed by hash only.
> >
> > The current implementation is limited to DV flow driver operations, i.e.
> > verbs flow driver operations don't support shared action.
> >
> > Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
>
> <...>
>
> > +static uint32_t
> > +__flow_dv_rss_get_hrxq(struct rte_eth_dev *dev, struct rte_flow *flow,
> > + struct mlx5_flow *dev_flow,
> > + struct mlx5_hrxq **hrxq)
> > +{
> > + struct mlx5_priv *priv = dev->data->dev_private;
> > + uint32_t hrxq_idx;
> > +
> > + if (flow->shared_rss) {
> > + hrxq_idx = __flow_dv_action_rss_hrxq_lookup
> > + (flow->shared_rss, dev_flow->hash_fields,
> > + !!(dev_flow->handle->layers &
> > + MLX5_FLOW_LAYER_TUNNEL));
> > + if (hrxq_idx) {
> > + *hrxq = mlx5_ipool_get(priv->sh-
> >ipool[MLX5_IPOOL_HRXQ],
> > + hrxq_idx);
> > + rte_atomic32_inc(&(*hrxq)->refcnt);
>
> I remember that adding more 'rte_atomicNN_xxx' calls to drivers has been
> discussed before, and it was mentioned that a separate commit will be done
> to replace all instances. I would like to remind it: is that work planned
> for -rc2?
>
There is a common patch coming that resolves all rte_atomic_xxx issues, including the mentioned ones.
We decided to keep this atomic in order not to trigger massive changes in this "shared action" patch,
as they are not relevant to the feature.
With best regards, Slava
^ permalink raw reply [flat|nested] 19+ messages in thread
* Re: [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action
2020-10-26 16:40 ` Thomas Monjalon
@ 2020-10-26 22:33 ` Asaf Penso
0 siblings, 0 replies; 19+ messages in thread
From: Asaf Penso @ 2020-10-26 22:33 UTC (permalink / raw)
To: Andrey Vesnovaty, Ferruh Yigit, Slava Ovsiienko, Matan Azrad,
NBU-Contact-Thomas Monjalon
Cc: dev, jerinjacobk, stephen, bruce.richardson, Ori Kam,
andrey.vesnovaty, mdr, ajit.khaparde, samik.gupta, Shahaf Shuler
As Slava mentioned, we are already working on a patch to align all PMD calls. It should be on the ML soon. We target it for -rc2.
Regards,
Asaf Penso
________________________________
From: Thomas Monjalon <thomas@monjalon.net>
Sent: Monday, October 26, 2020 6:40:13 PM
To: Andrey Vesnovaty <andreyv@nvidia.com>; Ferruh Yigit <ferruh.yigit@intel.com>; Slava Ovsiienko <viacheslavo@nvidia.com>; Matan Azrad <matan@nvidia.com>; Asaf Penso <asafp@nvidia.com>
Cc: dev@dpdk.org <dev@dpdk.org>; jerinjacobk@gmail.com <jerinjacobk@gmail.com>; stephen@networkplumber.org <stephen@networkplumber.org>; bruce.richardson@intel.com <bruce.richardson@intel.com>; Ori Kam <orika@nvidia.com>; andrey.vesnovaty@gmail.com <andrey.vesnovaty@gmail.com>; mdr@ashroe.eu <mdr@ashroe.eu>; ajit.khaparde@broadcom.com <ajit.khaparde@broadcom.com>; samik.gupta@broadcom.com <samik.gupta@broadcom.com>; Shahaf Shuler <shahafs@nvidia.com>
Subject: Re: [PATCH v2 4/4] net/mlx5: driver support for shared action
26/10/2020 17:38, Ferruh Yigit:
> On 10/23/2020 11:24 AM, Andrey Vesnovaty wrote:
> > + rte_atomic32_inc(&(*hrxq)->refcnt);
>
> I remember adding more 'rte_atomicNN_xxx' to driver has been discussed before,
> and it has been mentioned that a seperate commit will be done to replace all
> instances, I would like to remind it, and is that work planned for the -rc2?
Adding Asaf to reply about the timeline.
^ permalink raw reply [flat|nested] 19+ messages in thread
end of thread, other threads:[~2020-10-26 22:34 UTC | newest]
Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-08 12:18 [dpdk-dev] [PATCH 0/4] Shared action RSS PMD impl Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 2/4] net/mlx5: modify hash Rx queue objects Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 3/4] net/mlx5: shared action PMD Andrey Vesnovaty
2020-10-08 12:18 ` [dpdk-dev] [PATCH 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Andrey Vesnovaty
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 1/4] common/mlx5: modify advanced Rx object via DevX Andrey Vesnovaty
2020-10-23 14:16 ` Slava Ovsiienko
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 2/4] net/mlx5: modify hash Rx queue objects Andrey Vesnovaty
2020-10-23 14:17 ` Slava Ovsiienko
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 3/4] net/mlx5: shared action PMD Andrey Vesnovaty
2020-10-23 14:17 ` Slava Ovsiienko
2020-10-23 10:24 ` [dpdk-dev] [PATCH v2 4/4] net/mlx5: driver support for shared action Andrey Vesnovaty
2020-10-23 14:17 ` Slava Ovsiienko
2020-10-26 16:38 ` Ferruh Yigit
2020-10-26 16:40 ` Thomas Monjalon
2020-10-26 22:33 ` Asaf Penso
2020-10-26 16:40 ` Slava Ovsiienko
2020-10-25 12:43 ` [dpdk-dev] [PATCH v2 0/4] Shared action RSS PMD impl Raslan Darawsheh