* [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver
@ 2020-07-08 14:43 Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 1/5] net/mlx5: add flow validation of eCPRI header Bing Zhao
` (5 more replies)
0 siblings, 6 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-08 14:43 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev
This patch set adds eCPRI flow rule support to the mlx5 PMD. Right
now, only eCPRI over the Ethernet layer (including VLAN) is
supported; eCPRI over UDP will be supported in the future. If a flow
rule to be inserted is not supported, the PMD returns an error
indicating the reason for the failure.
Depends-on: series-10860 ("rte_flow: introduce eCPRI item for rte_flow")
Bing Zhao (4):
net/mlx5: add flow validation of eCPRI header
net/mlx5: add flow translation of eCPRI header
net/mlx5: add flex parser devx structures
net/mlx5: create and destroy eCPRI flex parser
Netanel Gonen (1):
net/mlx5: adding Devx command for flex parsers
drivers/common/mlx5/mlx5_devx_cmds.c | 168 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 52 ++++++++
drivers/common/mlx5/mlx5_prm.h | 99 +++++++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
drivers/net/mlx5/mlx5.c | 76 +++++++++++
drivers/net/mlx5/mlx5.h | 19 +++
drivers/net/mlx5/mlx5_flow.c | 106 ++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++
drivers/net/mlx5/mlx5_flow_dv.c | 125 ++++++++++++++++++
9 files changed, 647 insertions(+), 9 deletions(-)
--
1.8.3.1
* [dpdk-dev] [PATCH 1/5] net/mlx5: add flow validation of eCPRI header
2020-07-08 14:43 [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver Bing Zhao
@ 2020-07-08 14:43 ` Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation " Bing Zhao
` (4 subsequent siblings)
5 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-08 14:43 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev
When creating a flow with an eCPRI header item, validating it is
mandatory. The detailed limitations are listed below:
1. Over Ether / VLAN, the ethertype must be 0xAEFE.
2. No tunnel support is described in the specification now.
3. An L3 layer is only supported when L4 is UDP, see #4.
4. eCPRI over TCP is not allowed by the specification, and eCPRI over
UDP is not supported right now.
5. Concatenation indicator matching is not supported now.
6. No need to check the revision.
7. Only the type field in the common header is mandatory, and the
whole byte should be matched integrally.
8. Fields in the message payload header are optional.
9. Only messages with type #0, #2 and #5 are supported now.
Some limitations come only from software right now, because there is
no need to support all the message types and protocol stack variants
listed in the specification.
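For illustration only, here is a minimal application-side sketch of a
rule that satisfies the constraints above: eCPRI IQ data (type #0)
over plain Ethernet, matching the whole type byte plus a PC ID, and
steering the traffic to one Rx queue. The field names follow the
eCPRI item layout of the depends-on rte_flow series (hdr.dw0 /
hdr.common.type / hdr.dummy[0]); the helper name, the 0x1234 PC ID
and the queue action are made up and not part of this patch.

#include <rte_byteorder.h>
#include <rte_ecpri.h>
#include <rte_ether.h>
#include <rte_flow.h>

/* Sketch only: steer eCPRI IQ data (type #0) with PC ID 0x1234. */
static struct rte_flow *
ecpri_iq_flow_create(uint16_t port_id, uint16_t queue_id,
                     struct rte_flow_error *err)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    /* Limitation #1: the ether type of the L2 layer must be 0xAEFE. */
    struct rte_flow_item_eth eth_spec = {
        .type = RTE_BE16(RTE_ETHER_TYPE_ECPRI),
    };
    struct rte_flow_item_eth eth_mask = { .type = RTE_BE16(0xFFFF) };
    /* Values and masks are given in network byte order. */
    struct rte_flow_item_ecpri ecpri_spec = {
        .hdr = {
            .dw0 = RTE_BE32(((const struct rte_ecpri_msg_hdr) {
                .common = { .type = RTE_ECPRI_MSG_TYPE_IQ_DATA },
            }).dw0),
            .dummy[0] = RTE_BE32(0x12340000), /* PC ID in first 2 bytes. */
        },
    };
    struct rte_flow_item_ecpri ecpri_mask = {
        .hdr = {
            /* Limitation #7: the type byte is matched as a whole. */
            .dw0 = RTE_BE32(((const struct rte_ecpri_msg_hdr) {
                .common = { .type = 0xFF },
            }).dw0),
            /* The body may only be matched together with the type. */
            .dummy[0] = RTE_BE32(0xFFFF0000),
        },
    };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH,
          .spec = &eth_spec, .mask = &eth_mask },
        { .type = RTE_FLOW_ITEM_TYPE_ECPRI,
          .spec = &ecpri_spec, .mask = &ecpri_mask },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = queue_id };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, err);
}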
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 106 +++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++++
drivers/net/mlx5/mlx5_flow_dv.c | 22 +++++++++
3 files changed, 136 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ae5ccc2..9309603 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1227,11 +1227,17 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
"rss action not supported for "
"egress");
- if (rss->level > 1 && !tunnel)
+ if (rss->level > 1 && !tunnel)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"inner RSS is not supported for "
"non-tunnel flows");
+ if ((item_flags & MLX5_FLOW_LAYER_ECPRI) &&
+ !(item_flags & MLX5_FLOW_LAYER_INNER_L4_UDP)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
+ "RSS on eCPRI is not supported now");
+ }
return 0;
}
@@ -1597,6 +1603,10 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -1695,6 +1705,10 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -2357,6 +2371,96 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/**
+ * Validate eCPRI item.
+ *
+ * @param[in] item
+ * Item specification.
+ * @param[in] item_flags
+ * Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
+ * @param[in] acc_mask
+ * Acceptable mask, if NULL default internal default mask
+ * will be used to check whether item fields are supported.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ecpri *mask = item->mask;
+ const struct rte_flow_item_ecpri nic_mask = {
+ .hdr = {
+ .dw0 = RTE_BE32(((const struct rte_ecpri_msg_hdr) {
+ .common = {
+ .type = 0xFF,
+ },
+ }).dw0),
+ .dummy[0] = 0xFFFFFFFF,
+ },
+ };
+ const uint64_t outer_l2_vlan = (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
+ struct rte_flow_item_ecpri mask_lo;
+
+ if ((last_item & outer_l2_vlan) && ether_type &&
+ ether_type != RTE_ETHER_TYPE_ECPRI)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow L2/VLAN layer "
+ "which ether type is not 0xAEFE.");
+ if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI with tunnel is not supported "
+ "right now.");
+ if (item_flags & MLX5_FLOW_LAYER_OUTER_L3)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "multiple L3 layers not supported");
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_TCP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow a TCP layer.");
+ /* In specification, eCPRI could be over UDP layer. */
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI over UDP layer is not yet"
+ "supported right now.");
+ /* Mask for type field in common header could be zero. */
+ if (!mask)
+ mask = &rte_flow_item_ecpri_mask;
+ mask_lo.hdr.dw0 = rte_be_to_cpu_32(mask->hdr.dw0);
+ /* Input mask is in big-endian format. */
+ if (mask_lo.hdr.common.type != 0 && mask_lo.hdr.common.type != 0xff)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "partial mask is not supported "
+ "for protocol");
+ else if (mask_lo.hdr.common.type == 0 && mask->hdr.dummy[0] != 0)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "message header mask must be after "
+ "a type mask");
+ return mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ acc_mask ? (const uint8_t *)acc_mask
+ : (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_ecpri),
+ error);
+}
+
/* Allocate unique ID for the split Q/RSS subflows. */
static uint32_t
flow_qrss_get_id(struct rte_eth_dev *dev)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43cbda8..6dfeef3 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -128,6 +128,9 @@ enum mlx5_feature_name {
/* Pattern tunnel Layer bits (continued). */
#define MLX5_FLOW_LAYER_GTP (1u << 28)
+/* Pattern eCPRI Layer bit. */
+#define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1027,6 +1030,12 @@ int mlx5_flow_validate_item_geneve(const struct rte_flow_item *item,
uint64_t item_flags,
struct rte_eth_dev *dev,
struct rte_flow_error *error);
+int mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error);
struct mlx5_meter_domains_infos *mlx5_flow_create_mtr_tbls
(struct rte_eth_dev *dev,
const struct mlx5_flow_meter *fm);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b5b683..6711c79 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4923,6 +4923,16 @@ struct field_modify_info modify_tcp[] = {
.hop_limits = 0xff,
},
};
+ const struct rte_flow_item_ecpri nic_ecpri_mask = {
+ .hdr = {
+ .dw0 = RTE_BE32(((const struct rte_ecpri_msg_hdr) {
+ .common = {
+ .type = 0xFF,
+ },
+ }).dw0),
+ .dummy[0] = 0xffffffff,
+ },
+ };
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *dev_conf = &priv->config;
uint16_t queue_index = 0xFFFF;
@@ -5173,6 +5183,17 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ /* Capacity will be checked in the translate stage. */
+ ret = mlx5_flow_validate_item_ecpri(items, item_flags,
+ last_item,
+ ether_type,
+ &nic_ecpri_mask,
+ error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -5882,6 +5903,7 @@ struct field_modify_info modify_tcp[] = {
* Set match on ethertype only if ETH header is not followed by VLAN.
* HW is optimized for IPv4/IPv6. In such cases, avoid setting
* ethertype, and use ip_version field instead.
+ * eCPRI over Ether layer will use type value 0xAEFE.
*/
if (eth_v->type == RTE_BE16(RTE_ETHER_TYPE_IPV4) &&
eth_m->type == 0xFFFF) {
--
1.8.3.1
* [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation of eCPRI header
2020-07-08 14:43 [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 1/5] net/mlx5: add flow validation of eCPRI header Bing Zhao
@ 2020-07-08 14:43 ` Bing Zhao
2020-07-09 12:22 ` Thomas Monjalon
2020-07-08 14:43 ` [dpdk-dev] [PATCH 3/5] net/mlx5: add flex parser devx structures Bing Zhao
` (3 subsequent siblings)
5 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-08 14:43 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev
In the translation stage, the eCPRI item is translated into the
format that the lower layer driver can use. All the fields that need
to be matched must be in network byte order after translation, as
well as the mask. Since the header in the item belongs to the network
protocol stack, the input header fields are considered to be in
big-endian format already.
Based on the definition in the PRM, the DW samples will be used for
matching in the FTE/STE. For now, only the type field and the PC ID,
RTC ID, and DLY MSR ID of the payload are supported. The masks should
be 00 ff 00 00 ff ff(00) 00 00 in network order, so two DWs are
needed to support such matching. The mask fields may be zeros to
support some wildcard rules, but it makes no sense to support a rule
matching only on the payload without matching the type field.
The DW sample IDs should be stored once after the flex parser is
created for eCPRI. There is no need to query the sample IDs each time
a flow rule with an eCPRI item is created, so no significant
insertion rate degradation is introduced.
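To spell out the byte layout above, here is a small sketch of the two
sample masks as 32-bit words whose in-memory bytes are in network
order; the constant names are only for illustration and do not appear
in the patch.

#include <rte_byteorder.h>

/* Sample #0 covers the 4-byte common header: 00 ff 00 00 matches only
 * the one-byte type field.
 */
static const rte_be32_t ecpri_sample0_mask = RTE_BE32(0x00FF0000);
/* Sample #1 covers the first DW of the message body: ff ff 00 00
 * matches a two-byte PC ID / RTC ID; a one-byte DLY MSR ID would use
 * RTE_BE32(0xFF000000), i.e. ff 00 00 00.
 */
static const rte_be32_t ecpri_sample1_mask = RTE_BE32(0xFFFF0000);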
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/common/mlx5/mlx5_prm.h | 16 ++++-
drivers/net/mlx5/mlx5.h | 15 +++++
drivers/net/mlx5/mlx5_flow_dv.c | 130 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 160 insertions(+), 1 deletion(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index c63795f..decc63d 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -709,6 +709,18 @@ struct mlx5_ifc_fte_match_set_misc3_bits {
u8 reserved_at_170[0x90];
};
+struct mlx5_ifc_fte_match_set_misc4_bits {
+ u8 prog_sample_field_value_0[0x20];
+ u8 prog_sample_field_id_0[0x20];
+ u8 prog_sample_field_value_1[0x20];
+ u8 prog_sample_field_id_1[0x20];
+ u8 prog_sample_field_value_2[0x20];
+ u8 prog_sample_field_id_2[0x20];
+ u8 prog_sample_field_value_3[0x20];
+ u8 prog_sample_field_id_3[0x20];
+ u8 reserved_at_100[0x100];
+};
+
/* Flow matcher. */
struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -716,6 +728,7 @@ struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits inner_headers;
struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
+ struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
};
enum {
@@ -723,7 +736,8 @@ enum {
MLX5_MATCH_CRITERIA_ENABLE_MISC_BIT,
MLX5_MATCH_CRITERIA_ENABLE_INNER_BIT,
MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
- MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT
+ MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT
};
enum {
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 46e66eb..51775ca 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -529,6 +529,19 @@ struct mlx5_flow_id_pool {
uint32_t max_id; /**< Maximum id can be allocated from the pool. */
};
+/* Supported flex parser profile ID. */
+enum mlx5_flex_parser_profile_id {
+ MLX5_FLEX_PARSER_ECPRI_0 = 0,
+ MLX5_FLEX_PARSER_MAX = 8,
+};
+
+/* Sample ID information of flex parser structure. */
+struct mlx5_flex_parser_profiles {
+ uint32_t ids[4]; /* Sample IDs for this profile. */
+ void *obj; /* Flex parser node object. */
+ uint32_t num; /* Actual number of samples. */
+};
+
/*
* Shared Infiniband device context for Master/Representors
* which belong to same IB device with multiple IB ports.
@@ -579,6 +592,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
struct mlx5_flow_id_pool *flow_id_pool; /* Flow ID pool. */
+ struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
+ /* Flex parser profiles information. */
struct mlx5_dev_shared_port port[]; /* per device port data array. */
};
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 6711c79..523d778 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4779,6 +4779,34 @@ struct field_modify_info modify_tcp[] = {
cnt, next);
}
+static inline bool
+flow_dv_flex_parser_ecpri_exist(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ return !!prf->obj;
+}
+
+
+static int
+flow_dv_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ return 0;
+}
+
+static void
+flow_dv_flex_parser_ecpri_release(struct rte_eth_dev *dev)
+{
+ (void)dev;
+ return;
+}
+
/**
* Verify the @p attributes will be correctly understood by the NIC and store
* them in the @p flow if everything is correct.
@@ -7258,6 +7286,90 @@ struct field_modify_info modify_tcp[] = {
rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
}
+/**
+ * Add eCPRI item to matcher and to the value.
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ * @param[in] samples
+ * Sample IDs to be used in the matching.
+ */
+static void
+flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher,
+ void *key, const struct rte_flow_item *item)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_ecpri *ecpri_m = item->mask;
+ const struct rte_flow_item_ecpri *ecpri_v = item->spec;
+ void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
+ misc_parameters_4);
+ void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+ uint32_t *samples;
+ void *dw_m;
+ void *dw_v;
+
+ if (!ecpri_v)
+ return;
+ if (!ecpri_m)
+ ecpri_m = &rte_flow_item_ecpri_mask;
+ /*
+ * Maximal four DW samples are supported in a single matching now.
+ * Two are used now for an eCPRI matching:
+ * 1. Type: one byte, mask should be 0x00ff0000 in network order
+ * 2. ID of a message: one or two bytes, mask 0xffff0000 or 0xff000000
+ * if any.
+ */
+ if (!ecpri_m->hdr.common.type)
+ return;
+ samples = priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0].ids;
+ /* Need to take the whole DW as the mask to fill the entry. */
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_0);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_0);
+ /* Already big endian (network order) in the header. */
+ *(uint32_t *)dw_m = ecpri_m->hdr.dw0;
+ *(uint32_t *)dw_v = ecpri_v->hdr.dw0;
+ /* Sample#0, used for matching type, offset 0. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_0, samples[0]);
+ /* It makes no sense to set the sample ID in the mask field. */
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_0, samples[0]);
+ /*
+ * Checking if message body part needs to be matched.
+ * Some wildcard rules only matching type field should be supported.
+ */
+ if (ecpri_m->hdr.dummy[0]) {
+ switch (ecpri_v->hdr.common.type) {
+ case RTE_ECPRI_MSG_TYPE_IQ_DATA:
+ case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
+ case RTE_ECPRI_MSG_TYPE_DLY_MSR:
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_1);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_1);
+ *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0];
+ *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0];
+ /* Sample#1, to match message body, offset 4. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_1, samples[1]);
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_1, samples[1]);
+ break;
+ default:
+ /* Others, do not match any sample ID. */
+ break;
+ }
+ }
+}
+
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -7293,6 +7405,9 @@ struct field_modify_info modify_tcp[] = {
match_criteria_enable |=
(!HEADER_IS_ZERO(match_criteria, misc_parameters_3)) <<
MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT;
+ match_criteria_enable |=
+ (!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
return match_criteria_enable;
}
@@ -8572,6 +8687,21 @@ struct field_modify_info modify_tcp[] = {
MLX5_PRIORITY_MAP_L2 : MLX5_PRIORITY_MAP_L4;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ if (!flow_dv_flex_parser_ecpri_exist(dev)) {
+ ret = flow_dv_flex_parser_ecpri_alloc(dev);
+ if (ret)
+ return rte_flow_error_set
+ (error, ret,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL,
+ "cannot create eCPRI parser");
+ }
+ flow_dv_translate_item_ecpri(dev, match_mask,
+ match_value, items);
+ /* No other protocol should follow eCPRI layer. */
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
break;
}
--
1.8.3.1
* [dpdk-dev] [PATCH 3/5] net/mlx5: add flex parser devx structures
2020-07-08 14:43 [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 1/5] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation " Bing Zhao
@ 2020-07-08 14:43 ` Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 4/5] net/mlx5: adding Devx command for flex parsers Bing Zhao
` (2 subsequent siblings)
5 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-08 14:43 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev
The structures and other definitions will be used for dynamic flex
parser creation via the DevX command interface. These structures
serve as intermediate variables and input parameters for the parser
creation API.
It is better to keep all members consistent with the PRM definition
even though some of them will not be used.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.h | 44 ++++++++++++++++++++++++++++++++++++
drivers/common/mlx5/mlx5_prm.h | 14 ++++++++++++
drivers/net/mlx5/mlx5.h | 5 ++--
3 files changed, 61 insertions(+), 2 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 25704ef..faabfb1 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -299,6 +299,50 @@ struct mlx5_devx_virtio_q_couners_attr {
uint32_t invalid_buffer;
};
+/*
+ * graph flow match sample attributes structure,
+ * used by flex parser operations.
+ */
+struct mlx5_devx_match_sample_attr {
+ uint32_t flow_match_sample_en:1;
+ uint32_t flow_match_sample_field_offset:16;
+ uint32_t flow_match_sample_offset_mode:4;
+ uint32_t flow_match_sample_field_offset_mask;
+ uint32_t flow_match_sample_field_offset_shift:4;
+ uint32_t flow_match_sample_field_base_offset:8;
+ uint32_t flow_match_sample_tunnel_mode:3;
+ uint32_t flow_match_sample_field_id;
+};
+
+/* graph node arc attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_arc_attr {
+ uint32_t compare_condition_value:16;
+ uint32_t start_inner_tunnel:1;
+ uint32_t arc_parse_graph_node:8;
+ uint32_t parse_graph_node_handle;
+};
+
+/* Maximal number of samples per graph node. */
+#define MLX5_GRAPH_NODE_SAMPLE_NUM 8
+
+/* Maximal number of input/output arcs per graph node. */
+#define MLX5_GRAPH_NODE_ARC_NUM 8
+
+/* parse graph node attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_node_attr {
+ uint32_t modify_field_select;
+ uint32_t header_length_mode:4;
+ uint32_t header_length_base_value:16;
+ uint32_t header_length_field_shift:4;
+ uint32_t header_length_field_offset:16;
+ uint32_t header_length_field_mask;
+ struct mlx5_devx_match_sample_attr sample[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ uint32_t next_header_field_offset:16;
+ uint32_t next_header_field_size:5;
+ struct mlx5_devx_graph_arc_attr in[MLX5_GRAPH_NODE_ARC_NUM];
+ struct mlx5_devx_graph_arc_attr out[MLX5_GRAPH_NODE_ARC_NUM];
+};
+
/* mlx5_devx_cmds.c */
__rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index decc63d..9fed365 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2539,6 +2539,20 @@ enum {
/* The bits meter color use. */
#define MLX5_MTR_COLOR_BITS 8
+/* Length mode of dynamic flex parser graph node. */
+enum mlx5_parse_graph_node_len_mode {
+ MLX5_GRAPH_NODE_LEN_FIXED = 0x0,
+ MLX5_GRAPH_NODE_LEN_FIELD = 0x1,
+ MLX5_GRAPH_NODE_LEN_BITMASK = 0x2,
+};
+
+/* Offset mode of the samples of flex parser. */
+enum mlx5_parse_graph_flow_match_sample_offset_mode {
+ MLX5_GRAPH_SAMPLE_OFFSET_FIXED = 0x0,
+ MLX5_GRAPH_SAMPLE_OFFSET_FIELD = 0x1,
+ MLX5_GRAPH_SAMPLE_OFFSET_BITMASK = 0x2,
+};
+
/**
* Convert a user mark to flow mark.
*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 51775ca..07cbaf4 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -537,9 +537,10 @@ enum mlx5_flex_parser_profile_id {
/* Sample ID information of flex parser structure. */
struct mlx5_flex_parser_profiles {
- uint32_t ids[4]; /* Sample IDs for this profile. */
- void *obj; /* Flex parser node object. */
uint32_t num; /* Actual number of samples. */
+ uint32_t ids[8]; /* Sample IDs for this profile. */
+ uint8_t offset[8]; /* Bytes offset of each parser. */
+ void *obj; /* Flex parser node object. */
};
/*
--
1.8.3.1
* [dpdk-dev] [PATCH 4/5] net/mlx5: adding Devx command for flex parsers
2020-07-08 14:43 [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver Bing Zhao
` (2 preceding siblings ...)
2020-07-08 14:43 ` [dpdk-dev] [PATCH 3/5] net/mlx5: add flex parser devx structures Bing Zhao
@ 2020-07-08 14:43 ` Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 5/5] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
5 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-08 14:43 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, Netanel Gonen, Netanel Gonen
From: Netanel Gonen <netanelg@r-arch-host16.mtr.labs.mlnx>
In order to use the dynamic flex parser to parse protocols that are
not supported natively, two steps are needed.
Firstly, creating the parse graph node. A flex parser has three
parts: node, arc and sample. The node is the whole structure of a
flex parser; when creating it, the length of the protocol header
should be specified. The input arc(s) are mandatory and tell the HW
when to use this parser to parse the packet. Up to 8 input arcs are
supported for a single parser node, which gives SW the ability to
support this protocol over multiple layers. The output arcs are
optional and up to 8 are supported as well. If the protocol is the
last header of the stack, the output arc should be NULL; otherwise it
should be specified. The protocol type in an arc indicates which
parser points to, or is pointed to from, this flex parser node. For
an output arc, the offset and size of the next-header type field
should be set in the node structure, so that the HW can get the
proper type of the next header and decide which parser to point to.
Note: there are two kinds of parsers now, native parsers and flex
parsers. An arc between two flex parsers is not supported at this
stage.
Secondly, querying the sample IDs. If the protocol header parsed with
the flex parser needs to be used in flow rule offloading, the DW
samples are needed when creating the parse graph node, and the byte
offset from the start of the header needs to be set for each sample.
After the node is created successfully, a general object handle is
returned. This object can be queried with a DevX command to get the
sample IDs.
When creating a flow, the sample IDs can be used to sample a DW from
the parsed header - 4 contiguous bytes starting from the offset. The
flow entry can specify a mask to use only part of this DW for
matching. Up to 8 samples are supported for a single parse graph
node, and the offset should not exceed the header length.
The HW resources are limited, so the low-level driver error should be
checked whenever creating a parse graph node fails.
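Below is a condensed usage sketch of the two new commands, borrowing
the eCPRI-specific values (fixed 8-byte header, input arc from the
MAC layer on ether type 0xAEFE, two samples at offsets 0 and 4) from
the parser created later in this series; the function name is only
illustrative and error handling is reduced to the minimum.

#include <rte_ether.h>
#include "mlx5_devx_cmds.h"
#include "mlx5_prm.h"

static struct mlx5_devx_obj *
flex_parser_usage_sketch(void *devx_ctx, uint32_t ids[2])
{
    struct mlx5_devx_graph_node_attr node = { .modify_field_select = 0 };
    struct mlx5_devx_obj *obj;

    /* Step 1: create the parse graph node. */
    node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
    node.header_length_base_value = 0x8;   /* Fixed 8-byte header. */
    /* One mandatory input arc: enter this parser after the MAC layer
     * when the compared ether type equals 0xAEFE.
     */
    node.in[0].arc_parse_graph_node = 0x2;
    node.in[0].compare_condition_value = RTE_ETHER_TYPE_ECPRI;
    /* No output arc: the protocol is the last header of the stack. */
    /* Two DW samples at fixed offsets 0 and 4 inside the header. */
    node.sample[0].flow_match_sample_en = 1;
    node.sample[0].flow_match_sample_offset_mode = 0x0;
    node.sample[0].flow_match_sample_field_base_offset = 0x0;
    node.sample[1].flow_match_sample_en = 1;
    node.sample[1].flow_match_sample_offset_mode = 0x0;
    node.sample[1].flow_match_sample_field_base_offset = 0x4;
    obj = mlx5_devx_cmd_create_flex_parser(devx_ctx, &node);
    if (!obj)
        return NULL;
    /* Step 2: query the sample IDs assigned by the firmware. */
    if (mlx5_devx_cmd_query_parse_samples(obj, ids, 2)) {
        mlx5_devx_cmd_destroy(obj);
        return NULL;
    }
    /* The object handle and the two sample IDs are kept by the caller
     * and used later when building flow matchers.
     */
    return obj;
}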
Signed-off-by: Netanel Gonen <netanelg@mellanox.com>
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 168 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 8 ++
drivers/common/mlx5/mlx5_prm.h | 69 +++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
4 files changed, 240 insertions(+), 7 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index ec92eb6..4bad466 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -396,6 +396,165 @@ struct mlx5_devx_obj *
}
}
+int
+mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num)
+{
+ uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_out, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_out, out, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ int ret;
+ uint32_t idx = 0;
+ uint32_t i;
+
+ if (num > 8) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Too many sample IDs to be fetched.");
+ return -rte_errno;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, flex_obj->id);
+ ret = mlx5_glue->devx_obj_query(flex_obj->obj, in, sizeof(in),
+ out, sizeof(out));
+ if (ret) {
+ rte_errno = errno;
+ DRV_LOG(ERR, "Failed to query sample IDs with object %p.",
+ flex_obj);
+ return -rte_errno;
+ }
+ for (i = 0; i < 8; i++) {
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+ uint32_t en;
+
+ en = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en);
+ if (!en)
+ continue;
+ ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_id);
+ }
+ if (num != idx) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Number of sample IDs are not as expected.");
+ return -rte_errno;
+ }
+ return ret;
+}
+
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_flex_parser(void* ctx,
+ struct mlx5_devx_graph_node_attr *data)
+{
+ uint32_t in[MLX5_ST_SZ_DW(create_flex_parser_in)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_in, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_in, in, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ void *in_arc = MLX5_ADDR_OF(parse_graph_flex, flex, input_arc);
+ void *out_arc = MLX5_ADDR_OF(parse_graph_flex, flex, output_arc);
+ struct mlx5_devx_obj *parse_flex_obj = NULL;
+ uint32_t i;
+
+ parse_flex_obj = rte_calloc(__func__, 1, sizeof(*parse_flex_obj), 0);
+ if (!parse_flex_obj) {
+ DRV_LOG(ERR, "Failed to allocate flex parser data");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(parse_graph_flex, flex, header_length_mode,
+ data->header_length_mode);
+ MLX5_SET(parse_graph_flex, flex, header_length_base_value,
+ data->header_length_base_value);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_offset,
+ data->header_length_field_offset);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_shift,
+ data->header_length_field_shift);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_mask,
+ data->header_length_field_mask);
+ for (i = 0; i < 8; i++) {
+ struct mlx5_devx_match_sample_attr *s = &data->sample[i];
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+
+ if (!s->flow_match_sample_en)
+ continue;
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en, !!s->flow_match_sample_en);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset,
+ s->flow_match_sample_field_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_offset_mode,
+ s->flow_match_sample_offset_mode);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_mask,
+ s->flow_match_sample_field_offset_mask);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_shift,
+ s->flow_match_sample_field_offset_shift);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_base_offset,
+ s->flow_match_sample_field_base_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_tunnel_mode,
+ s->flow_match_sample_tunnel_mode);
+ }
+ for (i = 0; i < 8; i++) {
+ struct mlx5_devx_graph_arc_attr *ia = &data->in[i];
+ struct mlx5_devx_graph_arc_attr *oa = &data->out[i];
+ void *in_off = (void *)((char *)in_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+ void *out_off = (void *)((char *)out_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+
+ if (ia->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, in_off,
+ compare_condition_value,
+ ia->compare_condition_value);
+ MLX5_SET(parse_graph_arc, in_off, start_inner_tunnel,
+ ia->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, in_off, arc_parse_graph_node,
+ ia->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, in_off, parse_graph_node_handle,
+ ia->parse_graph_node_handle);
+ }
+ if (oa->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, out_off,
+ compare_condition_value,
+ oa->compare_condition_value);
+ MLX5_SET(parse_graph_arc, out_off, start_inner_tunnel,
+ oa->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, out_off, arc_parse_graph_node,
+ oa->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, out_off, parse_graph_node_handle,
+ oa->parse_graph_node_handle);
+ }
+ }
+ parse_flex_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+ out, sizeof(out));
+ if (!parse_flex_obj->obj) {
+ rte_errno = errno;
+ DRV_LOG(ERR, "Failed to create FLEX PARSE GRAPH object "
+ "by using DevX.");
+ rte_free(parse_flex_obj);
+ return NULL;
+ }
+ parse_flex_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+ return parse_flex_obj;
+}
+
/**
* Query HCA attributes.
* Using those attributes we can check on run time if the device
@@ -467,6 +626,9 @@ struct mlx5_devx_obj *
attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
general_obj_types) &
MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
+ attr->parse_graph_flex_node = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+ general_obj_types) &
+ MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE);
if (attr->qos.sup) {
MLX5_SET(query_hca_cap_in, in, op_mod,
MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1024,7 +1186,7 @@ struct mlx5_devx_obj *
if (ret) {
DRV_LOG(ERR, "Failed to modify SQ using DevX");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1337,7 +1499,7 @@ struct mlx5_devx_obj *
if (ret) {
DRV_LOG(ERR, "Failed to modify VIRTQ using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1540,7 +1702,7 @@ struct mlx5_devx_obj *
if (ret) {
DRV_LOG(ERR, "Failed to modify QP using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index faabfb1..9a91649 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -68,6 +68,7 @@ struct mlx5_hca_attr {
uint32_t eswitch_manager:1;
uint32_t flow_counters_dump:1;
uint32_t log_max_rqt_size:5;
+ uint32_t parse_graph_flex_node:1;
uint8_t flow_counter_bulk_alloc_bitmap;
uint32_t eth_net_offloads:1;
uint32_t eth_virt:1;
@@ -416,6 +417,13 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
__rte_internal
int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
struct mlx5_devx_rqt_attr *rqt_attr);
+__rte_internal
+int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num);
+
+__rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_flex_parser(void* ctx,
+ struct mlx5_devx_graph_node_attr *data);
/**
* Create virtio queue counters object DevX API.
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 9fed365..2b63d5a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -961,10 +961,9 @@ enum {
MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1,
};
-enum {
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
-};
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q (1ULL << 0xd)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS (1ULL << 0x1c)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE (1ULL << 0x22)
enum {
MLX5_HCA_CAP_OPMOD_GET_MAX = 0,
@@ -2022,6 +2021,7 @@ struct mlx5_ifc_create_cq_in_bits {
enum {
MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH = 0x0022,
};
struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2500,6 +2500,67 @@ struct mlx5_ifc_query_qp_in_bits {
u8 reserved_at_60[0x20];
};
+struct mlx5_ifc_parse_graph_arc_bits {
+ u8 start_inner_tunnel[0x1];
+ u8 reserved_at_1[0x7];
+ u8 arc_parse_graph_node[0x8];
+ u8 compare_condition_value[0x10];
+ u8 parse_graph_node_handle[0x20];
+ u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_parse_graph_flow_match_sample_bits {
+ u8 flow_match_sample_en[0x1];
+ u8 reserved_at_1[0x3];
+ u8 flow_match_sample_offset_mode[0x4];
+ u8 reserved_at_5[0x8];
+ u8 flow_match_sample_field_offset[0x10];
+ u8 reserved_at_32[0x4];
+ u8 flow_match_sample_field_offset_shift[0x4];
+ u8 flow_match_sample_field_base_offset[0x8];
+ u8 reserved_at_48[0xd];
+ u8 flow_match_sample_tunnel_mode[0x3];
+ u8 flow_match_sample_field_offset_mask[0x20];
+ u8 flow_match_sample_field_id[0x20];
+};
+
+struct mlx5_ifc_parse_graph_flex_bits {
+ u8 modify_field_select[0x40];
+ u8 reserved_at_64[0x20];
+ u8 header_length_base_value[0x10];
+ u8 reserved_at_112[0x4];
+ u8 header_length_field_shift[0x4];
+ u8 reserved_at_120[0x4];
+ u8 header_length_mode[0x4];
+ u8 header_length_field_offset[0x10];
+ u8 next_header_field_offset[0x10];
+ u8 reserved_at_160[0x1b];
+ u8 next_header_field_size[0x5];
+ u8 header_length_field_mask[0x20];
+ u8 reserved_at_224[0x20];
+ struct mlx5_ifc_parse_graph_flow_match_sample_bits sample_table[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits input_arc[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits output_arc[0x8];
+};
+
+struct mlx5_ifc_create_flex_parser_in_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_create_flex_parser_out_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_parse_graph_flex_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+ u8 syndrome[0x20];
+ u8 reserved_at_40[0x40];
+ struct mlx5_ifc_parse_graph_flex_bits capability;
+};
+
/* CQE format mask. */
#define MLX5E_CQE_FORMAT_MASK 0xc
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index ae57ebd..c86497f 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -11,6 +11,7 @@ INTERNAL {
mlx5_dev_to_pci_addr;
mlx5_devx_cmd_create_cq;
+ mlx5_devx_cmd_create_flex_parser;
mlx5_devx_cmd_create_qp;
mlx5_devx_cmd_create_rq;
mlx5_devx_cmd_create_rqt;
@@ -32,6 +33,7 @@ INTERNAL {
mlx5_devx_cmd_modify_virtq;
mlx5_devx_cmd_qp_query_tis_td;
mlx5_devx_cmd_query_hca_attr;
+ mlx5_devx_cmd_query_parse_samples;
mlx5_devx_cmd_query_virtio_q_counters;
mlx5_devx_cmd_query_virtq;
mlx5_devx_get_out_command_status;
--
1.8.3.1
* [dpdk-dev] [PATCH 5/5] net/mlx5: create and destroy eCPRI flex parser
2020-07-08 14:43 [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver Bing Zhao
` (3 preceding siblings ...)
2020-07-08 14:43 ` [dpdk-dev] [PATCH 4/5] net/mlx5: adding Devx command for flex parsers Bing Zhao
@ 2020-07-08 14:43 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
5 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-08 14:43 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev
The eCPRI protocol has a unified format layout for its variants over
the ETH layer (including .1Q) and the UDP layer.
The common header of the message has a fixed length of 4 bytes, and
the message payload layout differs based on the type field. Now only
types #0, #2 and #5 will be supported, and 2 bytes of the payload
header are needed.
When creating the flex parser, the header is extended to 8 bytes and
2 DW samples are needed. The 1st DW starts at offset 0 and will be
used for the type field of the common header. The 2nd DW starts at
offset 4 and will be used for the physical channel ID, real-time
control ID or measurement ID fields.
The parser will be created when a flow with an eCPRI item is observed
for the first time. After creation, it will remain in the system and
the HW until the device is stopped. Right now, there is no need to
destroy the eCPRI flex parser after the last flow with an eCPRI item
is destroyed. This avoids alternately creating and destroying the
eCPRI flex parser when only a single eCPRI flow is inserted and
removed repeatedly.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 76 +++++++++++++++++++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 3 ++
drivers/net/mlx5/mlx5_flow_dv.c | 35 +++----------------
3 files changed, 83 insertions(+), 31 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 07c6add..4f6274d 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -597,6 +597,80 @@ struct mlx5_flow_id_pool *
mlx5_ipool_destroy(sh->ipool[i]);
}
+bool
+mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ return !!prf->obj;
+}
+
+int
+mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+ struct mlx5_devx_graph_node_attr node = {
+ .modify_field_select = 0,
+ };
+ uint32_t ids[8];
+ int ret;
+
+ node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
+ /* 8 bytes now: 4B common header + 4B message body header. */
+ node.header_length_base_value = 0x8;
+ /* After MAC layer: Ether / VLAN. */
+ node.in[0].arc_parse_graph_node = 0x2;
+ /* Type of compared condition should be 0xAEFE in the L2 layer. */
+ node.in[0].compare_condition_value = RTE_ETHER_TYPE_ECPRI;
+ /* Sample #0: type in common header. */
+ node.sample[0].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[0].flow_match_sample_offset_mode = 0x0;
+ /* Only the 2nd byte will be used. */
+ node.sample[0].flow_match_sample_field_base_offset = 0x0;
+ /* Sample #1: message payload. */
+ node.sample[1].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[1].flow_match_sample_offset_mode = 0x0;
+ /* Only the first two bytes will be used. */
+ node.sample[1].flow_match_sample_field_base_offset = 0x4;
+ prf->obj = mlx5_devx_cmd_create_flex_parser(priv->sh->ctx, &node);
+ if (!prf->obj) {
+ DRV_LOG(ERR, "Failed to create flex parser node object.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->num = 2;
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample IDs.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->offset[0] = 0x0;
+ prf->offset[1] = 0x4;
+ prf->ids[0] = ids[0];
+ prf->ids[1] = ids[1];
+ return 0;
+}
+
+static void
+mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ if (prf->obj)
+ mlx5_devx_cmd_destroy(prf->obj);
+ prf->obj = NULL;
+ return;
+}
+
+
/**
* Allocate shared device context. If there is multiport device the
* master and representors will share this context, if there is single
@@ -1158,6 +1232,8 @@ struct mlx5_dev_ctx_shared *
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_req_stop_rxtx(dev);
+ /* Free the eCPRI flex parser resource. */
+ mlx5_flex_parser_ecpri_release(dev);
if (priv->rxqs != NULL) {
/* XXX race condition if mlx5_rx_burst() is still running. */
usleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 07cbaf4..42a8bd5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -734,6 +734,8 @@ int mlx5_dev_check_sibling_config(struct mlx5_priv *priv,
int mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
int mlx5_hairpin_cap_get(struct rte_eth_dev *dev,
struct rte_eth_hairpin_cap *cap);
+bool mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev);
+int mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev);
/* mlx5_ethdev.c */
@@ -957,4 +959,5 @@ int mlx5_os_read_dev_stat(struct mlx5_priv *priv,
void mlx5_os_stats_init(struct rte_eth_dev *dev);
void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb,
mlx5_dereg_mr_t *dereg_mr_cb);
+
#endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 523d778..987c635 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4779,34 +4779,6 @@ struct field_modify_info modify_tcp[] = {
cnt, next);
}
-static inline bool
-flow_dv_flex_parser_ecpri_exist(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_flex_parser_profiles *prf =
- &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
-
- return !!prf->obj;
-}
-
-
-static int
-flow_dv_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
-{
- struct mlx5_priv *priv = dev->data->dev_private;
- struct mlx5_flex_parser_profiles *prf =
- &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
-
- return 0;
-}
-
-static void
-flow_dv_flex_parser_ecpri_release(struct rte_eth_dev *dev)
-{
- (void)dev;
- return;
-}
-
/**
* Verify the @p attributes will be correctly understood by the NIC and store
* them in the @p flow if everything is correct.
@@ -8688,11 +8660,12 @@ struct field_modify_info modify_tcp[] = {
last_item = MLX5_FLOW_LAYER_GTP;
break;
case RTE_FLOW_ITEM_TYPE_ECPRI:
- if (!flow_dv_flex_parser_ecpri_exist(dev)) {
- ret = flow_dv_flex_parser_ecpri_alloc(dev);
+ if (!mlx5_flex_parser_ecpri_exist(dev)) {
+ /* Create it only the first time to be used. */
+ ret = mlx5_flex_parser_ecpri_alloc(dev);
if (ret)
return rte_flow_error_set
- (error, ret,
+ (error, -ret,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL,
"cannot create eCPRI parser");
--
1.8.3.1
* Re: [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation of eCPRI header
2020-07-08 14:43 ` [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation " Bing Zhao
@ 2020-07-09 12:22 ` Thomas Monjalon
2020-07-09 14:47 ` Bing Zhao
0 siblings, 1 reply; 40+ messages in thread
From: Thomas Monjalon @ 2020-07-09 12:22 UTC (permalink / raw)
To: Bing Zhao; +Cc: orika, viacheslavo, rasland, matan, dev
08/07/2020 16:43, Bing Zhao:
> In the translation stage, the eCPRI item should be translated into
> the format that lower layer driver could use. All the fields that
> need to matched must be in network byte order after translation, as
> well as the mask. Since the header in the item belongs to the network
> layers stack, and the input parameter of the header is considered to
> be in big-endian format already.
>
> Base on the definition in the PRM, the DW samples will be used for
> matching in the FTE/STE. Now, the type field and only the PC ID, RTC
> ID, and DLY MSR ID of the payload will be supported. The masks should
> be 00 ff 00 00 ff ff(00) 00 00 in the network order. Two DWs are
> needed to support such matching. The mask fields could be zeros to
> support some wildcard rules. But it makes no sense to support the
> rule matching only on the payload but without matching type filed.
>
> The DW samples should be stored after the flex parser creation for
> eCPRI. There is no need to query the sample IDs each time when
> creating a flow rule with eCPRI item. It will not introduce
> insertion rate degradation significantly.
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
> ---
> drivers/common/mlx5/mlx5_prm.h | 16 ++++-
> drivers/net/mlx5/mlx5.h | 15 +++++
> drivers/net/mlx5/mlx5_flow_dv.c | 130 ++++++++++++++++++++++++++++++++++++++++
> 3 files changed, 160 insertions(+), 1 deletion(-)
In this patch, you could add the feature in the release notes,
as part of mlx5 section, and probably in the mlx5 guide too.
* Re: [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation of eCPRI header
2020-07-09 12:22 ` Thomas Monjalon
@ 2020-07-09 14:47 ` Bing Zhao
0 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-09 14:47 UTC (permalink / raw)
To: Thomas Monjalon
Cc: Ori Kam, Slava Ovsiienko, Raslan Darawsheh, Matan Azrad, dev
Got it, I will add it.
Thanks for your comments.
BR. Bing
> -----Original Message-----
> From: Thomas Monjalon <thomas@monjalon.net>
> Sent: Thursday, July 9, 2020 8:22 PM
> To: Bing Zhao <bingz@mellanox.com>
> Cc: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>; Raslan Darawsheh
> <rasland@mellanox.com>; Matan Azrad <matan@mellanox.com>;
> dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation of
> eCPRI header
>
> 08/07/2020 16:43, Bing Zhao:
> > In the translation stage, the eCPRI item should be translated into the
> > format that lower layer driver could use. All the fields that need to
> > matched must be in network byte order after translation, as well as
> > the mask. Since the header in the item belongs to the network layers
> > stack, and the input parameter of the header is considered to be in
> > big-endian format already.
> >
> > Base on the definition in the PRM, the DW samples will be used for
> > matching in the FTE/STE. Now, the type field and only the PC ID, RTC
> > ID, and DLY MSR ID of the payload will be supported. The masks
> should
> > be 00 ff 00 00 ff ff(00) 00 00 in the network order. Two DWs are
> > needed to support such matching. The mask fields could be zeros to
> > support some wildcard rules. But it makes no sense to support the
> rule
> > matching only on the payload but without matching type filed.
> >
> > The DW samples should be stored after the flex parser creation for
> > eCPRI. There is no need to query the sample IDs each time when
> > creating a flow rule with eCPRI item. It will not introduce insertion
> > rate degradation significantly.
> >
> > Signed-off-by: Bing Zhao <bingz@mellanox.com>
> > ---
> > drivers/common/mlx5/mlx5_prm.h | 16 ++++-
> > drivers/net/mlx5/mlx5.h | 15 +++++
> > drivers/net/mlx5/mlx5_flow_dv.c | 130
> > ++++++++++++++++++++++++++++++++++++++++
> > 3 files changed, 160 insertions(+), 1 deletion(-)
>
> In this patch, you could add the feature in the release notes, as part of
> mlx5 section, and probably in the mlx5 guide too.
>
>
>
* [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver
2020-07-08 14:43 [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver Bing Zhao
` (4 preceding siblings ...)
2020-07-08 14:43 ` [dpdk-dev] [PATCH 5/5] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
` (7 more replies)
5 siblings, 8 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
This patch set adds eCPRI flow rule support to the mlx5 PMD. Right
now, only eCPRI over the Ethernet layer (including VLAN) is
supported; eCPRI over UDP will be supported in the future. If a flow
rule to be inserted is not supported, the PMD returns an error
indicating the reason for the failure.
v2:
1. added documentation updates
2. added NIC / FW capacity check
3. fixed the mask check of the type field in the common header, plus
code cleanup
Bing Zhao (7):
net/mlx5: add flow validation of eCPRI header
net/mlx5: add flow translation of eCPRI header
common/mlx5: add flex parser DevX structures
common/mlx5: adding DevX command for flex parsers
net/mlx5: create and destroy eCPRI flex parser
net/mlx5: add eCPRI flex parser capacity check
doc: update release notes and guides for eCPRI
doc/guides/nics/mlx5.rst | 5 +
doc/guides/rel_notes/release_20_08.rst | 1 +
drivers/common/mlx5/mlx5_devx_cmds.c | 170 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 52 ++++++++
drivers/common/mlx5/mlx5_prm.h | 115 +++++++++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
drivers/net/mlx5/mlx5.c | 107 +++++++++++++++
drivers/net/mlx5/mlx5.h | 19 +++
drivers/net/mlx5/mlx5_flow.c | 107 ++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++
drivers/net/mlx5/mlx5_flow_dv.c | 126 ++++++++++++++++++
11 files changed, 704 insertions(+), 9 deletions(-)
--
2.5.5
* [dpdk-dev] [PATCH v2 1/7] net/mlx5: add flow validation of eCPRI header
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 2/7] net/mlx5: add flow translation " Bing Zhao
` (6 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
When creating a flow with an eCPRI header item, validating it is
mandatory. The detailed limitations are listed below:
1. Over Ether / VLAN, the ethertype must be 0xAEFE.
2. No tunnel support is described in the specification now.
3. An L3 layer is only supported when L4 is UDP, see #4.
4. eCPRI over TCP is not allowed by the specification, and eCPRI over
UDP is not supported right now.
5. Concatenation indicator matching is not supported now.
6. No need to check the revision.
7. Only the type field in the common header is mandatory, and the
whole byte should be matched integrally.
8. Fields in the message payload header are optional.
9. Only messages with type #0, #2 and #5 are supported now.
Some limitations come only from software right now, because there is
no need to support all the message types and protocol stack variants
listed in the specification.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 107 +++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++++
drivers/net/mlx5/mlx5_flow_dv.c | 23 +++++++++
3 files changed, 138 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ae5ccc2..12d80b5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1227,11 +1227,17 @@ mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
"rss action not supported for "
"egress");
- if (rss->level > 1 && !tunnel)
+ if (rss->level > 1 && !tunnel)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"inner RSS is not supported for "
"non-tunnel flows");
+ if ((item_flags & MLX5_FLOW_LAYER_ECPRI) &&
+ !(item_flags & MLX5_FLOW_LAYER_INNER_L4_UDP)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
+ "RSS on eCPRI is not supported now");
+ }
return 0;
}
@@ -1597,6 +1603,10 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -1695,6 +1705,10 @@ mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -2357,6 +2371,97 @@ mlx5_flow_validate_item_nvgre(const struct rte_flow_item *item,
return 0;
}
+/**
+ * Validate eCPRI item.
+ *
+ * @param[in] item
+ * Item specification.
+ * @param[in] item_flags
+ * Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
+ * @param[in] acc_mask
+ * Acceptable mask, if NULL default internal default mask
+ * will be used to check whether item fields are supported.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ecpri *mask = item->mask;
+ const struct rte_flow_item_ecpri nic_mask = {
+ .hdr = {
+ .common = {
+ .u32 =
+ RTE_BE32(((const struct rte_ecpri_common_hdr) {
+ .type = 0xFF,
+ }).u32),
+ },
+ .dummy[0] = 0xFFFFFFFF,
+ },
+ };
+ const uint64_t outer_l2_vlan = (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
+ struct rte_flow_item_ecpri mask_lo;
+
+ if ((last_item & outer_l2_vlan) && ether_type &&
+ ether_type != RTE_ETHER_TYPE_ECPRI)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow L2/VLAN layer "
+ "which ether type is not 0xAEFE.");
+ if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI with tunnel is not supported "
+ "right now.");
+ if (item_flags & MLX5_FLOW_LAYER_OUTER_L3)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "multiple L3 layers not supported");
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_TCP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow a TCP layer.");
+ /* In specification, eCPRI could be over UDP layer. */
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI over UDP layer is not yet "
+ "supported right now.");
+ /* Mask for type field in common header could be zero. */
+ if (!mask)
+ mask = &rte_flow_item_ecpri_mask;
+ mask_lo.hdr.common.u32 = rte_be_to_cpu_32(mask->hdr.common.u32);
+ /* Input mask is in big-endian format. */
+ if (mask_lo.hdr.common.type != 0 && mask_lo.hdr.common.type != 0xff)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "partial mask is not supported "
+ "for protocol");
+ else if (mask_lo.hdr.common.type == 0 && mask->hdr.dummy[0] != 0)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "message header mask must be after "
+ "a type mask");
+ return mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ acc_mask ? (const uint8_t *)acc_mask
+ : (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_ecpri),
+ error);
+}
+
/* Allocate unique ID for the split Q/RSS subflows. */
static uint32_t
flow_qrss_get_id(struct rte_eth_dev *dev)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43cbda8..6dfeef3 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -128,6 +128,9 @@ enum mlx5_feature_name {
/* Pattern tunnel Layer bits (continued). */
#define MLX5_FLOW_LAYER_GTP (1u << 28)
+/* Pattern eCPRI Layer bit. */
+#define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1027,6 +1030,12 @@ int mlx5_flow_validate_item_geneve(const struct rte_flow_item *item,
uint64_t item_flags,
struct rte_eth_dev *dev,
struct rte_flow_error *error);
+int mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error);
struct mlx5_meter_domains_infos *mlx5_flow_create_mtr_tbls
(struct rte_eth_dev *dev,
const struct mlx5_flow_meter *fm);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b5b683..f042a42 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4923,6 +4923,17 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
.hop_limits = 0xff,
},
};
+ const struct rte_flow_item_ecpri nic_ecpri_mask = {
+ .hdr = {
+ .common = {
+ .u32 =
+ RTE_BE32(((const struct rte_ecpri_common_hdr) {
+ .type = 0xFF,
+ }).u32),
+ },
+ .dummy[0] = 0xffffffff,
+ },
+ };
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *dev_conf = &priv->config;
uint16_t queue_index = 0xFFFF;
@@ -5173,6 +5184,17 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ /* Capacity will be checked in the translate stage. */
+ ret = mlx5_flow_validate_item_ecpri(items, item_flags,
+ last_item,
+ ether_type,
+ &nic_ecpri_mask,
+ error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -5882,6 +5904,7 @@ flow_dv_translate_item_eth(void *matcher, void *key,
* Set match on ethertype only if ETH header is not followed by VLAN.
* HW is optimized for IPv4/IPv6. In such cases, avoid setting
* ethertype, and use ip_version field instead.
+ * eCPRI over Ether layer will use type value 0xAEFE.
*/
if (eth_v->type == RTE_BE16(RTE_ETHER_TYPE_IPV4) &&
eth_m->type == 0xFFFF) {
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v2 2/7] net/mlx5: add flow translation of eCPRI header
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
` (5 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
In the translation stage, the eCPRI item should be translated into
the format that the lower layer driver can use. All the fields that
need to be matched, as well as the mask, must be in network byte
order after translation. Since the header in the item belongs to the
network protocol stack, the input header fields are considered to be
in big-endian format already.
Based on the definition in the PRM, the DW samples will be used for
matching in the FTE/STE. Now, only the type field and the PC ID, RTC
ID and DLY MSR ID fields of the payload will be supported. The masks
should be 00 ff 00 00 ff ff(00) 00 00 in network order. Two DWs are
needed to support such matching. The mask fields could be zeros to
support some wildcard rules, but it makes no sense to support a rule
that matches only on the payload without matching the type field.
The DW samples should be stored after the flex parser creation for
eCPRI. There is no need to query the sample IDs each time a flow rule
with an eCPRI item is created, so it will not introduce any
significant insertion rate degradation.
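As a worked illustration (a sketch only; the byte layout of the type #0
message follows the eCPRI specification and is not taken from this patch),
consider matching IQ data messages whose PC ID is 0x1234. The parser samples
the first two DWs of the header, and the masks above land in them as follows:

    parsed 8 bytes (network order):   r 00 s s | 12 34 q q
        r = revision/reserved/C, 00 = type (IQ data), s s = payload size,
        12 34 = PC ID to be matched, q q = sequence ID (not matched)
    DW sample #0 (offset 0): mask 00 ff 00 00 -> full one-byte match on type
    DW sample #1 (offset 4): mask ff ff 00 00 -> two-byte match on the PC ID

Each DW value and its mask are copied into misc_parameters_4 as
prog_sample_field_value_0/1, and the corresponding prog_sample_field_id_0/1
are filled with the sample IDs stored at flex parser creation time.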
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
v2: fix the endianness issue of the type mask field checking.
---
drivers/common/mlx5/mlx5_prm.h | 16 ++++++-
drivers/net/mlx5/mlx5.c | 53 +++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 18 +++++++
drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 188 insertions(+), 1 deletion(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index c63795f..e6278c0 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -709,6 +709,18 @@ struct mlx5_ifc_fte_match_set_misc3_bits {
u8 reserved_at_170[0x90];
};
+struct mlx5_ifc_fte_match_set_misc4_bits {
+ u8 prog_sample_field_value_0[0x20];
+ u8 prog_sample_field_id_0[0x20];
+ u8 prog_sample_field_value_1[0x20];
+ u8 prog_sample_field_id_1[0x20];
+ u8 prog_sample_field_value_2[0x20];
+ u8 prog_sample_field_id_2[0x20];
+ u8 prog_sample_field_value_3[0x20];
+ u8 prog_sample_field_id_3[0x20];
+ u8 reserved_at_100[0x100];
+};
+
/* Flow matcher. */
struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -716,6 +728,7 @@ struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits inner_headers;
struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
+ struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
};
enum {
@@ -723,7 +736,8 @@ enum {
MLX5_MATCH_CRITERIA_ENABLE_MISC_BIT,
MLX5_MATCH_CRITERIA_ENABLE_INNER_BIT,
MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
- MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT
+ MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
};
enum {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0c654ed..daa9467 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -595,6 +595,59 @@ mlx5_flow_ipool_destroy(struct mlx5_dev_ctx_shared *sh)
mlx5_ipool_destroy(sh->ipool[i]);
}
+/*
+ * Check if dynamic flex parser for eCPRI already exists.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * true if exists, false otherwise.
+ */
+bool
+mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ return !!prf->obj;
+}
+
+/*
+ * Allocation of a flex parser for eCPRI. Once created, this parser related
+ * resources will be held until the device is closed.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ (void)prf;
+ return 0;
+}
+
+/*
+ * Destroy the flex parser node, including the parser itself, input / output
+ * arcs and DW samples. Resources could be reused then.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ */
+static void
+flow_dv_flex_parser_ecpri_release(struct rte_eth_dev *dev)
+{
+ (void)dev;
+}
+
/**
* Allocate shared device context. If there is multiport device the
* master and representors will share this context, if there is single
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 46e66eb..b79675d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -529,6 +529,20 @@ struct mlx5_flow_id_pool {
uint32_t max_id; /**< Maximum id can be allocated from the pool. */
};
+/* Supported flex parser profile ID. */
+enum mlx5_flex_parser_profile_id {
+ MLX5_FLEX_PARSER_ECPRI_0 = 0,
+ MLX5_FLEX_PARSER_MAX = 8,
+};
+
+/* Sample ID information of flex parser structure. */
+struct mlx5_flex_parser_profiles {
+ uint32_t num; /* Actual number of samples. */
+ uint32_t ids[8]; /* Sample IDs for this profile. */
+ uint8_t offset[8]; /* Bytes offset of each parser. */
+ void *obj; /* Flex parser node object. */
+};
+
/*
* Shared Infiniband device context for Master/Representors
* which belong to same IB device with multiple IB ports.
@@ -579,6 +593,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
struct mlx5_flow_id_pool *flow_id_pool; /* Flow ID pool. */
+ struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
+ /* Flex parser profiles information. */
struct mlx5_dev_shared_port port[]; /* per device port data array. */
};
@@ -718,6 +734,8 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size);
int mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
int mlx5_hairpin_cap_get(struct rte_eth_dev *dev,
struct rte_eth_hairpin_cap *cap);
+bool mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev);
+int mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev);
/* mlx5_ethdev.c */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f042a42..cd2b0f0 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7259,6 +7259,90 @@ flow_dv_translate_item_gtp(void *matcher, void *key,
rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
}
+/**
+ * Add eCPRI item to matcher and to the value.
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ * @param[in] samples
+ * Sample IDs to be used in the matching.
+ */
+static void
+flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher,
+ void *key, const struct rte_flow_item *item)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_ecpri *ecpri_m = item->mask;
+ const struct rte_flow_item_ecpri *ecpri_v = item->spec;
+ void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
+ misc_parameters_4);
+ void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+ uint32_t *samples;
+ void *dw_m;
+ void *dw_v;
+
+ if (!ecpri_v)
+ return;
+ if (!ecpri_m)
+ ecpri_m = &rte_flow_item_ecpri_mask;
+ /*
+ * Maximal four DW samples are supported in a single matching now.
+ * Two are used now for a eCPRI matching:
+ * 1. Type: one byte, mask should be 0x00ff0000 in network order
+ * 2. ID of a message: one or two bytes, mask 0xffff0000 or 0xff000000
+ * if any.
+ */
+ if (!ecpri_m->hdr.common.u32)
+ return;
+ samples = priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0].ids;
+ /* Need to take the whole DW as the mask to fill the entry. */
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_0);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_0);
+ /* Already big endian (network order) in the header. */
+ *(uint32_t *)dw_m = ecpri_m->hdr.common.u32;
+ *(uint32_t *)dw_v = ecpri_v->hdr.common.u32;
+ /* Sample#0, used for matching type, offset 0. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_0, samples[0]);
+ /* It makes no sense to set the sample ID in the mask field. */
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_0, samples[0]);
+ /*
+ * Checking if message body part needs to be matched.
+ * Some wildcard rules only matching type field should be supported.
+ */
+ if (ecpri_m->hdr.dummy[0]) {
+ switch (ecpri_v->hdr.common.type) {
+ case RTE_ECPRI_MSG_TYPE_IQ_DATA:
+ case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
+ case RTE_ECPRI_MSG_TYPE_DLY_MSR:
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_1);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_1);
+ *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0];
+ *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0];
+ /* Sample#1, to match message body, offset 4. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_1, samples[1]);
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_1, samples[1]);
+ break;
+ default:
+ /* Others, do not match any sample ID. */
+ break;
+ }
+ }
+}
+
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -7294,6 +7378,9 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
match_criteria_enable |=
(!HEADER_IS_ZERO(match_criteria, misc_parameters_3)) <<
MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT;
+ match_criteria_enable |=
+ (!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
return match_criteria_enable;
}
@@ -8573,6 +8660,21 @@ __flow_dv_translate(struct rte_eth_dev *dev,
MLX5_PRIORITY_MAP_L2 : MLX5_PRIORITY_MAP_L4;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ if (!mlx5_flex_parser_ecpri_exist(dev)) {
+ ret = mlx5_flex_parser_ecpri_alloc(dev);
+ if (ret)
+ return rte_flow_error_set
+ (error, ret,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL,
+ "cannot create eCPRI parser");
+ }
+ flow_dv_translate_item_ecpri(dev, match_mask,
+ match_value, items);
+ /* No other protocol should follow eCPRI layer. */
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
break;
}
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v2 3/7] common/mlx5: add flex parser DevX structures
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 2/7] net/mlx5: add flow translation " Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
` (4 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
The structures and other definitions will be used for the dynamic
flex parser creation via the DevX command interface. These structures
will be used as intermediate variables and input parameters for the
parser creation API.
It is better to keep all members consistent with the PRM definition
even though some of them will not be used.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.h | 44 ++++++++++++++++++++++++++++++++++++
drivers/common/mlx5/mlx5_prm.h | 30 ++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 2 +-
3 files changed, 75 insertions(+), 1 deletion(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 25704ef..faabfb1 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -299,6 +299,50 @@ struct mlx5_devx_virtio_q_couners_attr {
uint32_t invalid_buffer;
};
+/*
+ * graph flow match sample attributes structure,
+ * used by flex parser operations.
+ */
+struct mlx5_devx_match_sample_attr {
+ uint32_t flow_match_sample_en:1;
+ uint32_t flow_match_sample_field_offset:16;
+ uint32_t flow_match_sample_offset_mode:4;
+ uint32_t flow_match_sample_field_offset_mask;
+ uint32_t flow_match_sample_field_offset_shift:4;
+ uint32_t flow_match_sample_field_base_offset:8;
+ uint32_t flow_match_sample_tunnel_mode:3;
+ uint32_t flow_match_sample_field_id;
+};
+
+/* graph node arc attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_arc_attr {
+ uint32_t compare_condition_value:16;
+ uint32_t start_inner_tunnel:1;
+ uint32_t arc_parse_graph_node:8;
+ uint32_t parse_graph_node_handle;
+};
+
+/* Maximal number of samples per graph node. */
+#define MLX5_GRAPH_NODE_SAMPLE_NUM 8
+
+/* Maximal number of input/output arcs per graph node. */
+#define MLX5_GRAPH_NODE_ARC_NUM 8
+
+/* parse graph node attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_node_attr {
+ uint32_t modify_field_select;
+ uint32_t header_length_mode:4;
+ uint32_t header_length_base_value:16;
+ uint32_t header_length_field_shift:4;
+ uint32_t header_length_field_offset:16;
+ uint32_t header_length_field_mask;
+ struct mlx5_devx_match_sample_attr sample[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ uint32_t next_header_field_offset:16;
+ uint32_t next_header_field_size:5;
+ struct mlx5_devx_graph_arc_attr in[MLX5_GRAPH_NODE_ARC_NUM];
+ struct mlx5_devx_graph_arc_attr out[MLX5_GRAPH_NODE_ARC_NUM];
+};
+
/* mlx5_devx_cmds.c */
__rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index e6278c0..e2a8e94 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2539,6 +2539,36 @@ enum {
/* The bits meter color use. */
#define MLX5_MTR_COLOR_BITS 8
+/* Length mode of dynamic flex parser graph node. */
+enum mlx5_parse_graph_node_len_mode {
+ MLX5_GRAPH_NODE_LEN_FIXED = 0x0,
+ MLX5_GRAPH_NODE_LEN_FIELD = 0x1,
+ MLX5_GRAPH_NODE_LEN_BITMASK = 0x2,
+};
+
+/* Offset mode of the samples of flex parser. */
+enum mlx5_parse_graph_flow_match_sample_offset_mode {
+ MLX5_GRAPH_SAMPLE_OFFSET_FIXED = 0x0,
+ MLX5_GRAPH_SAMPLE_OFFSET_FIELD = 0x1,
+ MLX5_GRAPH_SAMPLE_OFFSET_BITMASK = 0x2,
+};
+
+/* Node index for an input / output arc of the flex parser graph. */
+enum mlx5_parse_graph_arc_node_index {
+ MLX5_GRAPH_ARC_NODE_NULL = 0x0,
+ MLX5_GRAPH_ARC_NODE_HEAD = 0x1,
+ MLX5_GRAPH_ARC_NODE_MAC = 0x2,
+ MLX5_GRAPH_ARC_NODE_IP = 0x3,
+ MLX5_GRAPH_ARC_NODE_GRE = 0x4,
+ MLX5_GRAPH_ARC_NODE_UDP = 0x5,
+ MLX5_GRAPH_ARC_NODE_MPLS = 0x6,
+ MLX5_GRAPH_ARC_NODE_TCP = 0x7,
+ MLX5_GRAPH_ARC_NODE_VXLAN_GPE = 0x8,
+ MLX5_GRAPH_ARC_NODE_GENEVE = 0x9,
+ MLX5_GRAPH_ARC_NODE_IPSEC_ESP = 0xa,
+ MLX5_GRAPH_ARC_NODE_PROGRAMMABLE = 0x1f,
+};
+
/**
* Convert a user mark to flow mark.
*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b79675d..c1b30be 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -539,7 +539,7 @@ enum mlx5_flex_parser_profile_id {
struct mlx5_flex_parser_profiles {
uint32_t num; /* Actual number of samples. */
uint32_t ids[8]; /* Sample IDs for this profile. */
- uint8_t offset[8]; /* Bytes offset of each parser. */
+ uint8_t offset[8]; /* Bytes offset of each parser. */
void *obj; /* Flex parser node object. */
};
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v2 4/7] common/mlx5: adding DevX command for flex parsers
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (2 preceding siblings ...)
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
` (3 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
In order to use the dynamic flex parser to parse protocols that are
not supported natively, two steps are needed.
Firstly, create the parse graph node. A flex parser has three parts:
node, arc and sample. The node is the whole structure of a flex
parser; when creating it, the length of the protocol header should be
specified. The input arc(s) is (are) mandatory and tells the HW when
to use this parser to parse the packet. For a single parser node, up
to 8 input arcs could be supported, giving SW the ability to support
this protocol over multiple layers. The output arcs are optional and
also up to 8 could be supported. If the protocol is the last header
of the stack, the output arc should be NULL; otherwise it should be
specified. The protocol type in an arc indicates the parser that this
flex parser node points to or is pointed from. For an output arc, the
offset and size of the next header type field should be set in the
node structure, so that the HW could get the proper type of the next
header and decide which parser to point to.
Note: there are two kinds of parsers now, native parsers and flex
parsers. An arc between two flex parsers is not supported at this
stage.
Secondly, query the sample IDs. If the protocol header parsed with
the flex parser needs to be used in flow rule offloading, the DW
samples are needed when creating the parse graph node. The byte
offset from the start of the header needs to be set. After creating
the node successfully, a general object handle will be returned.
This object could be queried with a DevX command to get the sample
IDs.
When creating a flow, the sample IDs could be used to sample a DW
from the parsed header - 4 contiguous bytes starting from the offset.
The flow entry could specify a mask to use only part of this DW for
matching. Up to 8 samples could be supported for a single parse
graph node. The offset should not exceed the header length.
The HW resources have some limitations, so the low layer driver error
should be checked whenever creating a parse graph node fails.
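Put together, the two steps read roughly as in the sketch below. It only
reuses the structures and commands added in this series: a hypothetical
fixed 8-byte protocol is reached from the MAC layer by ether type 0xAEFE
(the value used elsewhere in this series), a single DW is sampled at offset
0, and the mlx5 DevX headers (mlx5_devx_cmds.h, mlx5_prm.h) plus rte_errno.h
are assumed to be included; error logging is trimmed.

/* Sketch only: create a parse graph node for a hypothetical fixed 8-byte
 * header over MAC and fetch its single sample ID.
 * "ctx" is the DevX context (priv->sh->ctx in the PMD). */
static int
example_flex_parser_create(void *ctx, uint32_t *sample_id,
                           struct mlx5_devx_obj **fp)
{
    struct mlx5_devx_graph_node_attr node = {
        .modify_field_select = 0,
    };

    /* Step 1: create the parse graph node. */
    node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
    node.header_length_base_value = 8;
    /* One input arc: enter from the L2 parser on ether type 0xAEFE. */
    node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_MAC;
    node.in[0].compare_condition_value = 0xAEFE;
    /* No output arc: this protocol is the last header of the stack. */
    /* One DW sample at a fixed offset 0 from the header start. */
    node.sample[0].flow_match_sample_en = 1;
    node.sample[0].flow_match_sample_offset_mode =
            MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
    node.sample[0].flow_match_sample_field_base_offset = 0;
    *fp = mlx5_devx_cmd_create_flex_parser(ctx, &node);
    if (!*fp)
        return -rte_errno;
    /* Step 2: query the sample ID to be used later in flow matching. */
    if (mlx5_devx_cmd_query_parse_samples(*fp, sample_id, 1)) {
        mlx5_devx_cmd_destroy(*fp);
        *fp = NULL;
        return -rte_errno;
    }
    return 0;
}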
Signed-off-by: Netanel Gonen <netanelg@mellanox.com>
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 170 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 8 ++
drivers/common/mlx5/mlx5_prm.h | 69 +++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
4 files changed, 242 insertions(+), 7 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 2179a83..38afc0d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -396,6 +396,167 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
}
}
+int
+mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num)
+{
+ uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_out, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_out, out, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ int ret;
+ uint32_t idx = 0;
+ uint32_t i;
+
+ if (num > MLX5_GRAPH_NODE_SAMPLE_NUM) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Too many sample IDs to be fetched.");
+ return -rte_errno;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, flex_obj->id);
+ ret = mlx5_glue->devx_obj_query(flex_obj->obj, in, sizeof(in),
+ out, sizeof(out));
+ if (ret) {
+ rte_errno = ret;
+ DRV_LOG(ERR, "Failed to query sample IDs with object %p.",
+ (void *)flex_obj);
+ return -rte_errno;
+ }
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+ uint32_t en;
+
+ en = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en);
+ if (!en)
+ continue;
+ ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_id);
+ }
+ if (num != idx) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Number of sample IDs are not as expected.");
+ return -rte_errno;
+ }
+ return ret;
+}
+
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_flex_parser(void *ctx,
+ struct mlx5_devx_graph_node_attr *data)
+{
+ uint32_t in[MLX5_ST_SZ_DW(create_flex_parser_in)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_in, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_in, in, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ void *in_arc = MLX5_ADDR_OF(parse_graph_flex, flex, input_arc);
+ void *out_arc = MLX5_ADDR_OF(parse_graph_flex, flex, output_arc);
+ struct mlx5_devx_obj *parse_flex_obj = NULL;
+ uint32_t i;
+
+ parse_flex_obj = rte_calloc(__func__, 1, sizeof(*parse_flex_obj), 0);
+ if (!parse_flex_obj) {
+ DRV_LOG(ERR, "Failed to allocate flex parser data");
+ rte_errno = ENOMEM;
+ rte_free(in);
+ return NULL;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(parse_graph_flex, flex, header_length_mode,
+ data->header_length_mode);
+ MLX5_SET(parse_graph_flex, flex, header_length_base_value,
+ data->header_length_base_value);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_offset,
+ data->header_length_field_offset);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_shift,
+ data->header_length_field_shift);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_mask,
+ data->header_length_field_mask);
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ struct mlx5_devx_match_sample_attr *s = &data->sample[i];
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+
+ if (!s->flow_match_sample_en)
+ continue;
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en, !!s->flow_match_sample_en);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset,
+ s->flow_match_sample_field_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_offset_mode,
+ s->flow_match_sample_offset_mode);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_mask,
+ s->flow_match_sample_field_offset_mask);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_shift,
+ s->flow_match_sample_field_offset_shift);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_base_offset,
+ s->flow_match_sample_field_base_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_tunnel_mode,
+ s->flow_match_sample_tunnel_mode);
+ }
+ for (i = 0; i < MLX5_GRAPH_NODE_ARC_NUM; i++) {
+ struct mlx5_devx_graph_arc_attr *ia = &data->in[i];
+ struct mlx5_devx_graph_arc_attr *oa = &data->out[i];
+ void *in_off = (void *)((char *)in_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+ void *out_off = (void *)((char *)out_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+
+ if (ia->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, in_off,
+ compare_condition_value,
+ ia->compare_condition_value);
+ MLX5_SET(parse_graph_arc, in_off, start_inner_tunnel,
+ ia->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, in_off, arc_parse_graph_node,
+ ia->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, in_off,
+ parse_graph_node_handle,
+ ia->parse_graph_node_handle);
+ }
+ if (oa->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, out_off,
+ compare_condition_value,
+ oa->compare_condition_value);
+ MLX5_SET(parse_graph_arc, out_off, start_inner_tunnel,
+ oa->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, out_off, arc_parse_graph_node,
+ oa->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, out_off,
+ parse_graph_node_handle,
+ oa->parse_graph_node_handle);
+ }
+ }
+ parse_flex_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+ out, sizeof(out));
+ if (!parse_flex_obj->obj) {
+ rte_errno = errno;
+ DRV_LOG(ERR, "Failed to create FLEX PARSE GRAPH object "
+ "by using DevX.");
+ rte_free(parse_flex_obj);
+ return NULL;
+ }
+ parse_flex_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+ return parse_flex_obj;
+}
+
/**
* Query HCA attributes.
* Using those attributes we can check on run time if the device
@@ -467,6 +628,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
general_obj_types) &
MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
+ attr->parse_graph_flex_node = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+ general_obj_types) &
+ MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE);
if (attr->qos.sup) {
MLX5_SET(query_hca_cap_in, in, op_mod,
MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1024,7 +1188,7 @@ mlx5_devx_cmd_modify_sq(struct mlx5_devx_obj *sq,
if (ret) {
DRV_LOG(ERR, "Failed to modify SQ using DevX");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1337,7 +1501,7 @@ mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj,
if (ret) {
DRV_LOG(ERR, "Failed to modify VIRTQ using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1540,7 +1704,7 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
if (ret) {
DRV_LOG(ERR, "Failed to modify QP using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index faabfb1..09edba9 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -68,6 +68,7 @@ struct mlx5_hca_attr {
uint32_t eswitch_manager:1;
uint32_t flow_counters_dump:1;
uint32_t log_max_rqt_size:5;
+ uint32_t parse_graph_flex_node:1;
uint8_t flow_counter_bulk_alloc_bitmap;
uint32_t eth_net_offloads:1;
uint32_t eth_virt:1;
@@ -416,6 +417,13 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
__rte_internal
int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
struct mlx5_devx_rqt_attr *rqt_attr);
+__rte_internal
+int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num);
+
+__rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_flex_parser(void *ctx,
+ struct mlx5_devx_graph_node_attr *data);
/**
* Create virtio queue counters object DevX API.
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index e2a8e94..8030d18 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -961,10 +961,9 @@ enum {
MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1,
};
-enum {
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
-};
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q (1ULL << 0xd)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS (1ULL << 0x1c)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE (1ULL << 0x22)
enum {
MLX5_HCA_CAP_OPMOD_GET_MAX = 0,
@@ -2022,6 +2021,7 @@ struct mlx5_ifc_create_cq_in_bits {
enum {
MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH = 0x0022,
};
struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2500,6 +2500,67 @@ struct mlx5_ifc_query_qp_in_bits {
u8 reserved_at_60[0x20];
};
+struct mlx5_ifc_parse_graph_arc_bits {
+ u8 start_inner_tunnel[0x1];
+ u8 reserved_at_1[0x7];
+ u8 arc_parse_graph_node[0x8];
+ u8 compare_condition_value[0x10];
+ u8 parse_graph_node_handle[0x20];
+ u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_parse_graph_flow_match_sample_bits {
+ u8 flow_match_sample_en[0x1];
+ u8 reserved_at_1[0x3];
+ u8 flow_match_sample_offset_mode[0x4];
+ u8 reserved_at_5[0x8];
+ u8 flow_match_sample_field_offset[0x10];
+ u8 reserved_at_32[0x4];
+ u8 flow_match_sample_field_offset_shift[0x4];
+ u8 flow_match_sample_field_base_offset[0x8];
+ u8 reserved_at_48[0xd];
+ u8 flow_match_sample_tunnel_mode[0x3];
+ u8 flow_match_sample_field_offset_mask[0x20];
+ u8 flow_match_sample_field_id[0x20];
+};
+
+struct mlx5_ifc_parse_graph_flex_bits {
+ u8 modify_field_select[0x40];
+ u8 reserved_at_64[0x20];
+ u8 header_length_base_value[0x10];
+ u8 reserved_at_112[0x4];
+ u8 header_length_field_shift[0x4];
+ u8 reserved_at_120[0x4];
+ u8 header_length_mode[0x4];
+ u8 header_length_field_offset[0x10];
+ u8 next_header_field_offset[0x10];
+ u8 reserved_at_160[0x1b];
+ u8 next_header_field_size[0x5];
+ u8 header_length_field_mask[0x20];
+ u8 reserved_at_224[0x20];
+ struct mlx5_ifc_parse_graph_flow_match_sample_bits sample_table[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits input_arc[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits output_arc[0x8];
+};
+
+struct mlx5_ifc_create_flex_parser_in_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_create_flex_parser_out_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_parse_graph_flex_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+ u8 syndrome[0x20];
+ u8 reserved_at_40[0x40];
+ struct mlx5_ifc_parse_graph_flex_bits capability;
+};
+
/* CQE format mask. */
#define MLX5E_CQE_FORMAT_MASK 0xc
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index ae57ebd..c86497f 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -11,6 +11,7 @@ INTERNAL {
mlx5_dev_to_pci_addr;
mlx5_devx_cmd_create_cq;
+ mlx5_devx_cmd_create_flex_parser;
mlx5_devx_cmd_create_qp;
mlx5_devx_cmd_create_rq;
mlx5_devx_cmd_create_rqt;
@@ -32,6 +33,7 @@ INTERNAL {
mlx5_devx_cmd_modify_virtq;
mlx5_devx_cmd_qp_query_tis_td;
mlx5_devx_cmd_query_hca_attr;
+ mlx5_devx_cmd_query_parse_samples;
mlx5_devx_cmd_query_virtio_q_counters;
mlx5_devx_cmd_query_virtq;
mlx5_devx_get_out_command_status;
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v2 5/7] net/mlx5: create and destroy eCPRI flex parser
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (3 preceding siblings ...)
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
` (2 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
The eCPRI protocol has a unified format layout for its variants,
over the ETH layer (including .1Q) and the UDP layer.
The common header of the message has a fixed length of 4 bytes, and
the message payload layout differs based on the type field. Now only
types #0, #2 and #5 will be supported, and 2 more bytes are needed.
When creating the flex parser, the header will be extended to 8
bytes and 2 DW samples are needed. The 1st DW starts from offset 0
and will be used for the type field of the common header. The 2nd
DW starts from offset 4 and will be used for the physical channel
ID, real-time control ID or measurement ID fields.
The parser will be created when a flow with an eCPRI item is observed
for the first time. After creation, it will remain in the system
and HW until the device is stopped. Right now, there is no need to
destroy the eCPRI flex parser after the last flow with an eCPRI item
is destroyed. This avoids alternating between creating and destroying
the eCPRI flex parser when only a single eCPRI flow exists.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 55 ++++++++++++++++++++++++++++++++++++++---
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow_dv.c | 3 ++-
3 files changed, 55 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index daa9467..aec0173 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -630,8 +630,49 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_flex_parser_profiles *prf =
&priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+ struct mlx5_devx_graph_node_attr node = {
+ .modify_field_select = 0,
+ };
+ uint32_t ids[8];
+ int ret;
- (void)prf;
+ node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
+ /* 8 bytes now: 4B common header + 4B message body header. */
+ node.header_length_base_value = 0x8;
+ /* After MAC layer: Ether / VLAN. */
+ node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_MAC;
+ /* Type of compared condition should be 0xAEFE in the L2 layer. */
+ node.in[0].compare_condition_value = RTE_ETHER_TYPE_ECPRI;
+ /* Sample #0: type in common header. */
+ node.sample[0].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[0].flow_match_sample_offset_mode = 0x0;
+ /* Only the 2nd byte will be used. */
+ node.sample[0].flow_match_sample_field_base_offset = 0x0;
+ /* Sample #1: message payload. */
+ node.sample[1].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[1].flow_match_sample_offset_mode = 0x0;
+ /*
+ * Only the first two bytes will be used right now, and its offset will
+ * start after the common header that with the length of a DW(u32).
+ */
+ node.sample[1].flow_match_sample_field_base_offset = sizeof(uint32_t);
+ prf->obj = mlx5_devx_cmd_create_flex_parser(priv->sh->ctx, &node);
+ if (!prf->obj) {
+ DRV_LOG(ERR, "Failed to create flex parser node object.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->num = 2;
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample IDs.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->offset[0] = 0x0;
+ prf->offset[1] = sizeof(uint32_t);
+ prf->ids[0] = ids[0];
+ prf->ids[1] = ids[1];
return 0;
}
@@ -643,9 +684,15 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
* Pointer to Ethernet device structure.
*/
static void
-flow_dv_flex_parser_ecpri_release(struct rte_eth_dev *dev)
+mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
{
- (void)dev;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ if (prf->obj)
+ mlx5_devx_cmd_destroy(prf->obj);
+ prf->obj = NULL;
}
/**
@@ -1209,6 +1256,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_req_stop_rxtx(dev);
+ /* Free the eCPRI flex parser resource. */
+ mlx5_flex_parser_ecpri_release(dev);
if (priv->rxqs != NULL) {
/* XXX race condition if mlx5_rx_burst() is still running. */
usleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c1b30be..42a8bd5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -959,4 +959,5 @@ int mlx5_os_get_stats_n(struct rte_eth_dev *dev);
void mlx5_os_stats_init(struct rte_eth_dev *dev);
void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb,
mlx5_dereg_mr_t *dereg_mr_cb);
+
#endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index cd2b0f0..ceb585d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8662,10 +8662,11 @@ __flow_dv_translate(struct rte_eth_dev *dev,
break;
case RTE_FLOW_ITEM_TYPE_ECPRI:
if (!mlx5_flex_parser_ecpri_exist(dev)) {
+ /* Create it only the first time to be used. */
ret = mlx5_flex_parser_ecpri_alloc(dev);
if (ret)
return rte_flow_error_set
- (error, ret,
+ (error, -ret,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL,
"cannot create eCPRI parser");
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v2 6/7] net/mlx5: add eCPRI flex parser capacity check
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (4 preceding siblings ...)
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 7/7] doc: update release notes and guides for eCPRI Bing Zhao
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
If the NIC or the FW does not support the dynamic flex parser,
an error will be returned when trying to create the parser for eCPRI,
and it is then hard to know the detailed reason for the failure.
Before creating the parser node and making any further use of the
parser, the capability bit saved in the HCA_CAP could be used to
confirm whether the dynamic flex parser is supported.
If not, an error with ENOTSUP will be returned directly to prevent
the following steps from being executed.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index aec0173..eaa2d3e 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -636,6 +636,11 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
uint32_t ids[8];
int ret;
+ if (!priv->caps.parse_graph_flex_node) {
+ DRV_LOG(ERR, "Dynamic flex parser is not supported "
+ "for device %s.", priv->dev_data->name);
+ return -ENOTSUP;
+ }
node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
/* 8 bytes now: 4B common header + 4B message body header. */
node.header_length_base_value = 0x8;
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v2 7/7] doc: update release notes and guides for eCPRI
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (5 preceding siblings ...)
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
@ 2020-07-16 13:49 ` Bing Zhao
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 13:49 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
Update the mlx5 PMD part of the release notes by adding the
support of eCPRI.
Update the firmware configuration in the mlx5 NIC guide to enable
the usage of eCPRI.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
doc/guides/nics/mlx5.rst | 5 +++++
doc/guides/rel_notes/release_20_08.rst | 1 +
2 files changed, 6 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4b6d8fb..191ec04 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -956,6 +956,11 @@ Below are some firmware configurations listed.
FLEX_PARSER_PROFILE_ENABLE=3
+- enable eCPRI flow matching::
+
+ FLEX_PARSER_PROFILE_ENABLE=4
+ PROG_PARSE_GRAPH=1
+
Prerequisites
-------------
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index f19b748..6f44ffd 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -122,6 +122,7 @@ New Features
* Added new PMD devarg ``reclaim_mem_mode``.
* Added new devarg ``lacp_by_user``.
+ * Added support for eCPRI protocol offloading.
* **Added vDPA device APIs to query virtio queue statistics.**
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (6 preceding siblings ...)
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 7/7] doc: update release notes and guides for eCPRI Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
` (7 more replies)
7 siblings, 8 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
This patch set is to add the eCPRI support of flow rules in mlx5 PMD
driver. Right now, only eCPRI over Ethernet layer (including VLAN)
is supported. eCPRI over UDP will be supported in the future. If the
flow rule to be inserted is not supported, PMD driver will return
error to indicate the reason of the failure.
v2: listed as below
1. added document updates
2. add NIC / FW capacity check
3. fix mask of type in common header check and code cleanup
v3: fix the wrong member name in the private structure
Bing Zhao (7):
net/mlx5: add flow validation of eCPRI header
net/mlx5: add flow translation of eCPRI header
common/mlx5: add flex parser DevX structures
common/mlx5: adding DevX command for flex parsers
net/mlx5: create and destroy eCPRI flex parser
net/mlx5: add eCPRI flex parser capacity check
doc: update release notes and guides for eCPRI
doc/guides/nics/mlx5.rst | 5 +
doc/guides/rel_notes/release_20_08.rst | 1 +
drivers/common/mlx5/mlx5_devx_cmds.c | 170 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 52 ++++++++
drivers/common/mlx5/mlx5_prm.h | 115 +++++++++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
drivers/net/mlx5/mlx5.c | 107 +++++++++++++++
drivers/net/mlx5/mlx5.h | 19 +++
drivers/net/mlx5/mlx5_flow.c | 107 ++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++
drivers/net/mlx5/mlx5_flow_dv.c | 126 ++++++++++++++++++
11 files changed, 704 insertions(+), 9 deletions(-)
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 15:04 ` Slava Ovsiienko
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add flow translation " Bing Zhao
` (6 subsequent siblings)
7 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
When creating a flow with an eCPRI header item, its validation is
mandatory. The detailed limitations are listed below:
1. Over Ether / VLAN, the ethertype must be 0xAEFE.
2. No tunnel support is described in the specification now.
3. The L3 layer is only supported when L4 is UDP, see #4.
4. Over TCP is not supported by the specification, and over UDP
is not supported right now.
5. Concatenation indicator matching is not supported now.
6. There is no need to check the revision.
7. Only the type field in the common header is mandatory, and the
whole byte should be matched integrally.
8. Fields in the message payload header are optional.
9. Only messages with type #0, #2 and #5 are supported now.
Some limitations are software-only right now, because there is no
need to support all the message types and protocol stack variants
listed in the specification.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 107 +++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++++
drivers/net/mlx5/mlx5_flow_dv.c | 23 +++++++++
3 files changed, 138 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ae5ccc2..12d80b5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1227,11 +1227,17 @@ mlx5_flow_validate_action_rss(const struct rte_flow_action *action,
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
"rss action not supported for "
"egress");
- if (rss->level > 1 && !tunnel)
+ if (rss->level > 1 && !tunnel)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"inner RSS is not supported for "
"non-tunnel flows");
+ if ((item_flags & MLX5_FLOW_LAYER_ECPRI) &&
+ !(item_flags & MLX5_FLOW_LAYER_INNER_L4_UDP)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
+ "RSS on eCPRI is not supported now");
+ }
return 0;
}
@@ -1597,6 +1603,10 @@ mlx5_flow_validate_item_vlan(const struct rte_flow_item *item,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -1695,6 +1705,10 @@ mlx5_flow_validate_item_ipv4(const struct rte_flow_item *item,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -2357,6 +2371,97 @@ mlx5_flow_validate_item_nvgre(const struct rte_flow_item *item,
return 0;
}
+/**
+ * Validate eCPRI item.
+ *
+ * @param[in] item
+ * Item specification.
+ * @param[in] item_flags
+ * Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
+ * @param[in] acc_mask
+ * Acceptable mask, if NULL default internal default mask
+ * will be used to check whether item fields are supported.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ecpri *mask = item->mask;
+ const struct rte_flow_item_ecpri nic_mask = {
+ .hdr = {
+ .common = {
+ .u32 =
+ RTE_BE32(((const struct rte_ecpri_common_hdr) {
+ .type = 0xFF,
+ }).u32),
+ },
+ .dummy[0] = 0xFFFFFFFF,
+ },
+ };
+ const uint64_t outer_l2_vlan = (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
+ struct rte_flow_item_ecpri mask_lo;
+
+ if ((last_item & outer_l2_vlan) && ether_type &&
+ ether_type != RTE_ETHER_TYPE_ECPRI)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow L2/VLAN layer "
+ "which ether type is not 0xAEFE.");
+ if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI with tunnel is not supported "
+ "right now.");
+ if (item_flags & MLX5_FLOW_LAYER_OUTER_L3)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "multiple L3 layers not supported");
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_TCP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow a TCP layer.");
+ /* In specification, eCPRI could be over UDP layer. */
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI over UDP layer is not yet "
+ "supported right now.");
+ /* Mask for type field in common header could be zero. */
+ if (!mask)
+ mask = &rte_flow_item_ecpri_mask;
+ mask_lo.hdr.common.u32 = rte_be_to_cpu_32(mask->hdr.common.u32);
+ /* Input mask is in big-endian format. */
+ if (mask_lo.hdr.common.type != 0 && mask_lo.hdr.common.type != 0xff)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "partial mask is not supported "
+ "for protocol");
+ else if (mask_lo.hdr.common.type == 0 && mask->hdr.dummy[0] != 0)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "message header mask must be after "
+ "a type mask");
+ return mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ acc_mask ? (const uint8_t *)acc_mask
+ : (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_ecpri),
+ error);
+}
+
/* Allocate unique ID for the split Q/RSS subflows. */
static uint32_t
flow_qrss_get_id(struct rte_eth_dev *dev)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43cbda8..6dfeef3 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -128,6 +128,9 @@ enum mlx5_feature_name {
/* Pattern tunnel Layer bits (continued). */
#define MLX5_FLOW_LAYER_GTP (1u << 28)
+/* Pattern eCPRI Layer bit. */
+#define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1027,6 +1030,12 @@ int mlx5_flow_validate_item_geneve(const struct rte_flow_item *item,
uint64_t item_flags,
struct rte_eth_dev *dev,
struct rte_flow_error *error);
+int mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error);
struct mlx5_meter_domains_infos *mlx5_flow_create_mtr_tbls
(struct rte_eth_dev *dev,
const struct mlx5_flow_meter *fm);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b5b683..f042a42 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4923,6 +4923,17 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
.hop_limits = 0xff,
},
};
+ const struct rte_flow_item_ecpri nic_ecpri_mask = {
+ .hdr = {
+ .common = {
+ .u32 =
+ RTE_BE32(((const struct rte_ecpri_common_hdr) {
+ .type = 0xFF,
+ }).u32),
+ },
+ .dummy[0] = 0xffffffff,
+ },
+ };
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *dev_conf = &priv->config;
uint16_t queue_index = 0xFFFF;
@@ -5173,6 +5184,17 @@ flow_dv_validate(struct rte_eth_dev *dev, const struct rte_flow_attr *attr,
return ret;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ /* Capacity will be checked in the translate stage. */
+ ret = mlx5_flow_validate_item_ecpri(items, item_flags,
+ last_item,
+ ether_type,
+ &nic_ecpri_mask,
+ error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -5882,6 +5904,7 @@ flow_dv_translate_item_eth(void *matcher, void *key,
* Set match on ethertype only if ETH header is not followed by VLAN.
* HW is optimized for IPv4/IPv6. In such cases, avoid setting
* ethertype, and use ip_version field instead.
+ * eCPRI over Ether layer will use type value 0xAEFE.
*/
if (eth_v->type == RTE_BE16(RTE_ETHER_TYPE_IPV4) &&
eth_m->type == 0xFFFF) {
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 2/7] net/mlx5: add flow translation of eCPRI header
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 15:04 ` Slava Ovsiienko
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
` (5 subsequent siblings)
7 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
In the translation stage, the eCPRI item should be translated into
the format that the lower layer driver can use. All the fields that
need to be matched, as well as the mask, must be in network byte
order after translation, since the header in the item belongs to the
network protocol stack and the input header is already considered to
be in big-endian format.
Based on the definition in the PRM, the DW samples will be used for
matching in the FTE/STE. For now, only the type field and the PC ID,
RTC ID and DLY MSR ID fields of the payload will be supported. The
masks should be 00 ff 00 00 ff ff(00) 00 00 in network order, and two
DWs are needed to support such matching. The mask fields could be
zeros to support some wildcard rules, but it makes no sense to
support a rule matching only on the payload without matching the
type field.
The DW sample IDs should be stored right after the flex parser for
eCPRI is created, so there is no need to query them each time a flow
rule with an eCPRI item is created, and no significant insertion
rate degradation is introduced.
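As an illustration of the mask layout described above, the following
is a minimal, hypothetical application-side sketch (assuming the item
and header layout from the dependent eCPRI rte_flow series; the PC ID
value 0x1234 is only an example) of a spec/mask pair matching the
type field and the PC ID:

/* Match eCPRI type #0 (IQ data) and a given PC ID; all values are
 * kept in network byte order and the message body is accessed as
 * the raw first DW (dummy[0]) of the payload.
 */
struct rte_flow_item_ecpri ecpri_spec = {
	.hdr = {
		.common = { .type = RTE_ECPRI_MSG_TYPE_IQ_DATA },
		.dummy[0] = RTE_BE32(0x12340000), /* PC ID 0x1234 */
	},
};
struct rte_flow_item_ecpri ecpri_mask = {
	.hdr = {
		.common = { .type = 0xff },       /* full type byte */
		.dummy[0] = RTE_BE32(0xffff0000), /* PC ID only */
	},
};
struct rte_flow_item item = {
	.type = RTE_FLOW_ITEM_TYPE_ECPRI,
	.spec = &ecpri_spec,
	.mask = &ecpri_mask,
};
/* Resulting per-DW masks in network order:
 * DW0 (common header): 00 ff 00 00 - type only
 * DW1 (message body):  ff ff 00 00 - PC ID only
 */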
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
v2: fix the endianness issue of the type mask field checking.
---
drivers/common/mlx5/mlx5_prm.h | 16 ++++++-
drivers/net/mlx5/mlx5.c | 53 +++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 18 +++++++
drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 188 insertions(+), 1 deletion(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index c63795f..e6278c0 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -709,6 +709,18 @@ struct mlx5_ifc_fte_match_set_misc3_bits {
u8 reserved_at_170[0x90];
};
+struct mlx5_ifc_fte_match_set_misc4_bits {
+ u8 prog_sample_field_value_0[0x20];
+ u8 prog_sample_field_id_0[0x20];
+ u8 prog_sample_field_value_1[0x20];
+ u8 prog_sample_field_id_1[0x20];
+ u8 prog_sample_field_value_2[0x20];
+ u8 prog_sample_field_id_2[0x20];
+ u8 prog_sample_field_value_3[0x20];
+ u8 prog_sample_field_id_3[0x20];
+ u8 reserved_at_100[0x100];
+};
+
/* Flow matcher. */
struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -716,6 +728,7 @@ struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits inner_headers;
struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
+ struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
};
enum {
@@ -723,7 +736,8 @@ enum {
MLX5_MATCH_CRITERIA_ENABLE_MISC_BIT,
MLX5_MATCH_CRITERIA_ENABLE_INNER_BIT,
MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
- MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT
+ MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
};
enum {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 0c654ed..daa9467 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -595,6 +595,59 @@ mlx5_flow_ipool_destroy(struct mlx5_dev_ctx_shared *sh)
mlx5_ipool_destroy(sh->ipool[i]);
}
+/*
+ * Check if dynamic flex parser for eCPRI already exists.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * true if it exists, false otherwise.
+ */
+bool
+mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ return !!prf->obj;
+}
+
+/*
+ * Allocation of a flex parser for eCPRI. Once created, the parser-related
+ * resources will be held until the device is closed.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ (void)prf;
+ return 0;
+}
+
+/*
+ * Destroy the flex parser node, including the parser itself, input / output
+ * arcs and DW samples. Resources could be reused then.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ */
+static void
+flow_dv_flex_parser_ecpri_release(struct rte_eth_dev *dev)
+{
+ (void)dev;
+}
+
/**
* Allocate shared device context. If there is multiport device the
* master and representors will share this context, if there is single
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 46e66eb..b79675d 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -529,6 +529,20 @@ struct mlx5_flow_id_pool {
uint32_t max_id; /**< Maximum id can be allocated from the pool. */
};
+/* Supported flex parser profile ID. */
+enum mlx5_flex_parser_profile_id {
+ MLX5_FLEX_PARSER_ECPRI_0 = 0,
+ MLX5_FLEX_PARSER_MAX = 8,
+};
+
+/* Sample ID information of flex parser structure. */
+struct mlx5_flex_parser_profiles {
+ uint32_t num; /* Actual number of samples. */
+ uint32_t ids[8]; /* Sample IDs for this profile. */
+ uint8_t offset[8]; /* Bytes offset of each parser. */
+ void *obj; /* Flex parser node object. */
+};
+
/*
* Shared Infiniband device context for Master/Representors
* which belong to same IB device with multiple IB ports.
@@ -579,6 +593,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_devx_obj *tis; /* TIS object. */
struct mlx5_devx_obj *td; /* Transport domain. */
struct mlx5_flow_id_pool *flow_id_pool; /* Flow ID pool. */
+ struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
+ /* Flex parser profiles information. */
struct mlx5_dev_shared_port port[]; /* per device port data array. */
};
@@ -718,6 +734,8 @@ int mlx5_fw_version_get(struct rte_eth_dev *dev, char *fw_ver, size_t fw_size);
int mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
int mlx5_hairpin_cap_get(struct rte_eth_dev *dev,
struct rte_eth_hairpin_cap *cap);
+bool mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev);
+int mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev);
/* mlx5_ethdev.c */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f042a42..cd2b0f0 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7259,6 +7259,90 @@ flow_dv_translate_item_gtp(void *matcher, void *key,
rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
}
+/**
+ * Add eCPRI item to matcher and to the value.
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ */
+static void
+flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher,
+ void *key, const struct rte_flow_item *item)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_ecpri *ecpri_m = item->mask;
+ const struct rte_flow_item_ecpri *ecpri_v = item->spec;
+ void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
+ misc_parameters_4);
+ void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+ uint32_t *samples;
+ void *dw_m;
+ void *dw_v;
+
+ if (!ecpri_v)
+ return;
+ if (!ecpri_m)
+ ecpri_m = &rte_flow_item_ecpri_mask;
+ /*
+ * Maximal four DW samples are supported in a single matching now.
+ * Two are used now for eCPRI matching:
+ * 1. Type: one byte, mask should be 0x00ff0000 in network order
+ * 2. ID of a message: one or two bytes, mask 0xffff0000 or 0xff000000
+ * if any.
+ */
+ if (!ecpri_m->hdr.common.u32)
+ return;
+ samples = priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0].ids;
+ /* Need to take the whole DW as the mask to fill the entry. */
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_0);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_0);
+ /* Already big endian (network order) in the header. */
+ *(uint32_t *)dw_m = ecpri_m->hdr.common.u32;
+ *(uint32_t *)dw_v = ecpri_v->hdr.common.u32;
+ /* Sample#0, used for matching type, offset 0. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_0, samples[0]);
+ /* It makes no sense to set the sample ID in the mask field. */
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_0, samples[0]);
+ /*
+ * Checking if message body part needs to be matched.
+ * Some wildcard rules only matching type field should be supported.
+ */
+ if (ecpri_m->hdr.dummy[0]) {
+ switch (ecpri_v->hdr.common.type) {
+ case RTE_ECPRI_MSG_TYPE_IQ_DATA:
+ case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
+ case RTE_ECPRI_MSG_TYPE_DLY_MSR:
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_1);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_1);
+ *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0];
+ *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0];
+ /* Sample#1, to match message body, offset 4. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_1, samples[1]);
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_1, samples[1]);
+ break;
+ default:
+ /* Others, do not match any sample ID. */
+ break;
+ }
+ }
+}
+
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -7294,6 +7378,9 @@ flow_dv_matcher_enable(uint32_t *match_criteria)
match_criteria_enable |=
(!HEADER_IS_ZERO(match_criteria, misc_parameters_3)) <<
MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT;
+ match_criteria_enable |=
+ (!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
return match_criteria_enable;
}
@@ -8573,6 +8660,21 @@ __flow_dv_translate(struct rte_eth_dev *dev,
MLX5_PRIORITY_MAP_L2 : MLX5_PRIORITY_MAP_L4;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ if (!mlx5_flex_parser_ecpri_exist(dev)) {
+ ret = mlx5_flex_parser_ecpri_alloc(dev);
+ if (ret)
+ return rte_flow_error_set
+ (error, ret,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL,
+ "cannot create eCPRI parser");
+ }
+ flow_dv_translate_item_ecpri(dev, match_mask,
+ match_value, items);
+ /* No other protocol should follow eCPRI layer. */
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
break;
}
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 3/7] common/mlx5: add flex parser DevX structures
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add flow translation " Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 15:04 ` Slava Ovsiienko
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
` (4 subsequent siblings)
7 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
The structures and other definitions will be used for the dynamic
flex parser creation via the DevX command interface. These structures
will be used as intermediate variables and input parameters for the
parser creation API.
It is better to keep all members consistent with the PRM definition
even though some of them will not be used.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.h | 44 ++++++++++++++++++++++++++++++++++++
drivers/common/mlx5/mlx5_prm.h | 30 ++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 2 +-
3 files changed, 75 insertions(+), 1 deletion(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 25704ef..faabfb1 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -299,6 +299,50 @@ struct mlx5_devx_virtio_q_couners_attr {
uint32_t invalid_buffer;
};
+/*
+ * graph flow match sample attributes structure,
+ * used by flex parser operations.
+ */
+struct mlx5_devx_match_sample_attr {
+ uint32_t flow_match_sample_en:1;
+ uint32_t flow_match_sample_field_offset:16;
+ uint32_t flow_match_sample_offset_mode:4;
+ uint32_t flow_match_sample_field_offset_mask;
+ uint32_t flow_match_sample_field_offset_shift:4;
+ uint32_t flow_match_sample_field_base_offset:8;
+ uint32_t flow_match_sample_tunnel_mode:3;
+ uint32_t flow_match_sample_field_id;
+};
+
+/* graph node arc attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_arc_attr {
+ uint32_t compare_condition_value:16;
+ uint32_t start_inner_tunnel:1;
+ uint32_t arc_parse_graph_node:8;
+ uint32_t parse_graph_node_handle;
+};
+
+/* Maximal number of samples per graph node. */
+#define MLX5_GRAPH_NODE_SAMPLE_NUM 8
+
+/* Maximal number of input/output arcs per graph node. */
+#define MLX5_GRAPH_NODE_ARC_NUM 8
+
+/* parse graph node attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_node_attr {
+ uint32_t modify_field_select;
+ uint32_t header_length_mode:4;
+ uint32_t header_length_base_value:16;
+ uint32_t header_length_field_shift:4;
+ uint32_t header_length_field_offset:16;
+ uint32_t header_length_field_mask;
+ struct mlx5_devx_match_sample_attr sample[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ uint32_t next_header_field_offset:16;
+ uint32_t next_header_field_size:5;
+ struct mlx5_devx_graph_arc_attr in[MLX5_GRAPH_NODE_ARC_NUM];
+ struct mlx5_devx_graph_arc_attr out[MLX5_GRAPH_NODE_ARC_NUM];
+};
+
/* mlx5_devx_cmds.c */
__rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index e6278c0..e2a8e94 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2539,6 +2539,36 @@ enum {
/* The bits meter color use. */
#define MLX5_MTR_COLOR_BITS 8
+/* Length mode of dynamic flex parser graph node. */
+enum mlx5_parse_graph_node_len_mode {
+ MLX5_GRAPH_NODE_LEN_FIXED = 0x0,
+ MLX5_GRAPH_NODE_LEN_FIELD = 0x1,
+ MLX5_GRAPH_NODE_LEN_BITMASK = 0x2,
+};
+
+/* Offset mode of the samples of flex parser. */
+enum mlx5_parse_graph_flow_match_sample_offset_mode {
+ MLX5_GRAPH_SAMPLE_OFFSET_FIXED = 0x0,
+ MLX5_GRAPH_SAMPLE_OFFSET_FIELD = 0x1,
+ MLX5_GRAPH_SAMPLE_OFFSET_BITMASK = 0x2,
+};
+
+/* Node index for an input / output arc of the flex parser graph. */
+enum mlx5_parse_graph_arc_node_index {
+ MLX5_GRAPH_ARC_NODE_NULL = 0x0,
+ MLX5_GRAPH_ARC_NODE_HEAD = 0x1,
+ MLX5_GRAPH_ARC_NODE_MAC = 0x2,
+ MLX5_GRAPH_ARC_NODE_IP = 0x3,
+ MLX5_GRAPH_ARC_NODE_GRE = 0x4,
+ MLX5_GRAPH_ARC_NODE_UDP = 0x5,
+ MLX5_GRAPH_ARC_NODE_MPLS = 0x6,
+ MLX5_GRAPH_ARC_NODE_TCP = 0x7,
+ MLX5_GRAPH_ARC_NODE_VXLAN_GPE = 0x8,
+ MLX5_GRAPH_ARC_NODE_GENEVE = 0x9,
+ MLX5_GRAPH_ARC_NODE_IPSEC_ESP = 0xa,
+ MLX5_GRAPH_ARC_NODE_PROGRAMMABLE = 0x1f,
+};
+
/**
* Convert a user mark to flow mark.
*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index b79675d..c1b30be 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -539,7 +539,7 @@ enum mlx5_flex_parser_profile_id {
struct mlx5_flex_parser_profiles {
uint32_t num; /* Actual number of samples. */
uint32_t ids[8]; /* Sample IDs for this profile. */
- uint8_t offset[8]; /* Bytes offset of each parser. */
+ uint8_t offset[8]; /* Bytes offset of each parser. */
void *obj; /* Flex parser node object. */
};
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 4/7] common/mlx5: adding DevX command for flex parsers
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (2 preceding siblings ...)
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 15:05 ` Slava Ovsiienko
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
` (3 subsequent siblings)
7 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
In order to use the dynamic flex parser to parse protocols that are
not supported natively, two steps are needed.
Firstly, create the parse graph node. A flex parser has three parts:
node, arc and sample. The node is the whole structure of a flex
parser; when creating it, the length of the protocol header should be
specified. The input arc(s) are mandatory: they tell the HW when to
use this parser to parse the packet. For a single parser node, up to
8 input arcs could be supported, which gives SW the ability to
support this protocol over multiple layers. The output arcs are
optional and also up to 8 could be supported. If the protocol is the
last header of the stack, the output arc should be NULL; otherwise it
should be specified. The protocol type in an arc indicates the parser
that this flex parser node points to or is pointed from. For an
output arc, the next header type field offset and size should be set
in the node structure, so that the HW can get the proper type of the
next header and decide which parser to point to.
Note: there are two types of parsers now, native parsers and flex
parsers. An arc between two flex parsers is not supported at this
stage.
Secondly, query the sample IDs. If the protocol header parsed with
the flex parser needs to be used in flow rule offloading, the DW
samples are needed when creating the parse graph node, and the byte
offset starting from the header needs to be set. After the node is
created successfully, a general object handle will be returned. This
object can be queried with a DevX command to get the sample IDs.
When creating a flow, the sample IDs could be used to sample a DW
from the parsed header - 4 contiguous bytes starting from the offset.
The flow entry could specify a mask to use only part of this DW for
matching. Up to 8 samples could be supported for a single parse graph
node, and the offsets should not exceed the header length.
The HW resources have some limitations, so the low layer driver error
should be checked whenever the creation of a parse graph node fails.
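To make the two steps concrete, below is a condensed, hypothetical
sketch built only on the DevX helpers and PRM enums introduced in
this series (the header length, arc and offset values are
illustrative and error handling is reduced to a minimum):

/* Create a parse graph node for a fixed-length 8B header that follows
 * the MAC layer, with a single DW sample at offset 0, then query the
 * sample ID for later use in the flow entries.
 * Needs mlx5_devx_cmds.h, mlx5_prm.h and rte_errno.h.
 */
static int
example_flex_parser_create(void *ctx, struct mlx5_devx_obj **obj,
			   uint32_t *sample_id)
{
	struct mlx5_devx_graph_node_attr node = {
		.modify_field_select = 0,
	};

	/* Step 1: create the parse graph node. */
	node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
	node.header_length_base_value = 8;
	/* Input arc: enter this parser from MAC on ethertype 0xAEFE. */
	node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_MAC;
	node.in[0].compare_condition_value = 0xAEFE;
	/* No output arc: the header is the last one of the stack. */
	/* One DW sample at a fixed offset 0 from the header start. */
	node.sample[0].flow_match_sample_en = 1;
	node.sample[0].flow_match_sample_offset_mode =
					MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
	node.sample[0].flow_match_sample_field_base_offset = 0;
	*obj = mlx5_devx_cmd_create_flex_parser(ctx, &node);
	if (!*obj)
		return -rte_errno;
	/* Step 2: query the IDs; the count must match the number of
	 * enabled samples, one in this sketch.
	 */
	return mlx5_devx_cmd_query_parse_samples(*obj, sample_id, 1);
}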
Signed-off-by: Netanel Gonen <netanelg@mellanox.com>
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 170 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 8 ++
drivers/common/mlx5/mlx5_prm.h | 69 +++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
4 files changed, 242 insertions(+), 7 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 2179a83..38afc0d 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -396,6 +396,167 @@ mlx5_devx_cmd_query_hca_vdpa_attr(void *ctx,
}
}
+int
+mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num)
+{
+ uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_out, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_out, out, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ int ret;
+ uint32_t idx = 0;
+ uint32_t i;
+
+ if (num > MLX5_GRAPH_NODE_SAMPLE_NUM) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Too many sample IDs to be fetched.");
+ return -rte_errno;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, flex_obj->id);
+ ret = mlx5_glue->devx_obj_query(flex_obj->obj, in, sizeof(in),
+ out, sizeof(out));
+ if (ret) {
+ rte_errno = ret;
+ DRV_LOG(ERR, "Failed to query sample IDs with object %p.",
+ (void *)flex_obj);
+ return -rte_errno;
+ }
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+ uint32_t en;
+
+ en = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en);
+ if (!en)
+ continue;
+ ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_id);
+ }
+ if (num != idx) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Number of sample IDs are not as expected.");
+ return -rte_errno;
+ }
+ return ret;
+}
+
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_flex_parser(void *ctx,
+ struct mlx5_devx_graph_node_attr *data)
+{
+ uint32_t in[MLX5_ST_SZ_DW(create_flex_parser_in)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_in, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_in, in, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ void *in_arc = MLX5_ADDR_OF(parse_graph_flex, flex, input_arc);
+ void *out_arc = MLX5_ADDR_OF(parse_graph_flex, flex, output_arc);
+ struct mlx5_devx_obj *parse_flex_obj = NULL;
+ uint32_t i;
+
+ parse_flex_obj = rte_calloc(__func__, 1, sizeof(*parse_flex_obj), 0);
+ if (!parse_flex_obj) {
+ DRV_LOG(ERR, "Failed to allocate flex parser data");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(parse_graph_flex, flex, header_length_mode,
+ data->header_length_mode);
+ MLX5_SET(parse_graph_flex, flex, header_length_base_value,
+ data->header_length_base_value);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_offset,
+ data->header_length_field_offset);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_shift,
+ data->header_length_field_shift);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_mask,
+ data->header_length_field_mask);
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ struct mlx5_devx_match_sample_attr *s = &data->sample[i];
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+
+ if (!s->flow_match_sample_en)
+ continue;
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en, !!s->flow_match_sample_en);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset,
+ s->flow_match_sample_field_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_offset_mode,
+ s->flow_match_sample_offset_mode);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_mask,
+ s->flow_match_sample_field_offset_mask);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_shift,
+ s->flow_match_sample_field_offset_shift);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_base_offset,
+ s->flow_match_sample_field_base_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_tunnel_mode,
+ s->flow_match_sample_tunnel_mode);
+ }
+ for (i = 0; i < MLX5_GRAPH_NODE_ARC_NUM; i++) {
+ struct mlx5_devx_graph_arc_attr *ia = &data->in[i];
+ struct mlx5_devx_graph_arc_attr *oa = &data->out[i];
+ void *in_off = (void *)((char *)in_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+ void *out_off = (void *)((char *)out_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+
+ if (ia->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, in_off,
+ compare_condition_value,
+ ia->compare_condition_value);
+ MLX5_SET(parse_graph_arc, in_off, start_inner_tunnel,
+ ia->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, in_off, arc_parse_graph_node,
+ ia->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, in_off,
+ parse_graph_node_handle,
+ ia->parse_graph_node_handle);
+ }
+ if (oa->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, out_off,
+ compare_condition_value,
+ oa->compare_condition_value);
+ MLX5_SET(parse_graph_arc, out_off, start_inner_tunnel,
+ oa->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, out_off, arc_parse_graph_node,
+ oa->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, out_off,
+ parse_graph_node_handle,
+ oa->parse_graph_node_handle);
+ }
+ }
+ parse_flex_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+ out, sizeof(out));
+ if (!parse_flex_obj->obj) {
+ rte_errno = errno;
+ DRV_LOG(ERR, "Failed to create FLEX PARSE GRAPH object "
+ "by using DevX.");
+ rte_free(parse_flex_obj);
+ return NULL;
+ }
+ parse_flex_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+ return parse_flex_obj;
+}
+
/**
* Query HCA attributes.
* Using those attributes we can check on run time if the device
@@ -467,6 +628,9 @@ mlx5_devx_cmd_query_hca_attr(void *ctx,
attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
general_obj_types) &
MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
+ attr->parse_graph_flex_node = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+ general_obj_types) &
+ MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE);
if (attr->qos.sup) {
MLX5_SET(query_hca_cap_in, in, op_mod,
MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1024,7 +1188,7 @@ mlx5_devx_cmd_modify_sq(struct mlx5_devx_obj *sq,
if (ret) {
DRV_LOG(ERR, "Failed to modify SQ using DevX");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1337,7 +1501,7 @@ mlx5_devx_cmd_modify_virtq(struct mlx5_devx_obj *virtq_obj,
if (ret) {
DRV_LOG(ERR, "Failed to modify VIRTQ using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1540,7 +1704,7 @@ mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp, uint32_t qp_st_mod_op,
if (ret) {
DRV_LOG(ERR, "Failed to modify QP using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index faabfb1..09edba9 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -68,6 +68,7 @@ struct mlx5_hca_attr {
uint32_t eswitch_manager:1;
uint32_t flow_counters_dump:1;
uint32_t log_max_rqt_size:5;
+ uint32_t parse_graph_flex_node:1;
uint8_t flow_counter_bulk_alloc_bitmap;
uint32_t eth_net_offloads:1;
uint32_t eth_virt:1;
@@ -416,6 +417,13 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
__rte_internal
int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
struct mlx5_devx_rqt_attr *rqt_attr);
+__rte_internal
+int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num);
+
+__rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_flex_parser(void *ctx,
+ struct mlx5_devx_graph_node_attr *data);
/**
* Create virtio queue counters object DevX API.
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index e2a8e94..8030d18 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -961,10 +961,9 @@ enum {
MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1,
};
-enum {
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
-};
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q (1ULL << 0xd)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS (1ULL << 0x1c)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE (1ULL << 0x22)
enum {
MLX5_HCA_CAP_OPMOD_GET_MAX = 0,
@@ -2022,6 +2021,7 @@ struct mlx5_ifc_create_cq_in_bits {
enum {
MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH = 0x0022,
};
struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2500,6 +2500,67 @@ struct mlx5_ifc_query_qp_in_bits {
u8 reserved_at_60[0x20];
};
+struct mlx5_ifc_parse_graph_arc_bits {
+ u8 start_inner_tunnel[0x1];
+ u8 reserved_at_1[0x7];
+ u8 arc_parse_graph_node[0x8];
+ u8 compare_condition_value[0x10];
+ u8 parse_graph_node_handle[0x20];
+ u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_parse_graph_flow_match_sample_bits {
+ u8 flow_match_sample_en[0x1];
+ u8 reserved_at_1[0x3];
+ u8 flow_match_sample_offset_mode[0x4];
+ u8 reserved_at_5[0x8];
+ u8 flow_match_sample_field_offset[0x10];
+ u8 reserved_at_32[0x4];
+ u8 flow_match_sample_field_offset_shift[0x4];
+ u8 flow_match_sample_field_base_offset[0x8];
+ u8 reserved_at_48[0xd];
+ u8 flow_match_sample_tunnel_mode[0x3];
+ u8 flow_match_sample_field_offset_mask[0x20];
+ u8 flow_match_sample_field_id[0x20];
+};
+
+struct mlx5_ifc_parse_graph_flex_bits {
+ u8 modify_field_select[0x40];
+ u8 reserved_at_64[0x20];
+ u8 header_length_base_value[0x10];
+ u8 reserved_at_112[0x4];
+ u8 header_length_field_shift[0x4];
+ u8 reserved_at_120[0x4];
+ u8 header_length_mode[0x4];
+ u8 header_length_field_offset[0x10];
+ u8 next_header_field_offset[0x10];
+ u8 reserved_at_160[0x1b];
+ u8 next_header_field_size[0x5];
+ u8 header_length_field_mask[0x20];
+ u8 reserved_at_224[0x20];
+ struct mlx5_ifc_parse_graph_flow_match_sample_bits sample_table[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits input_arc[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits output_arc[0x8];
+};
+
+struct mlx5_ifc_create_flex_parser_in_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_create_flex_parser_out_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_parse_graph_flex_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+ u8 syndrome[0x20];
+ u8 reserved_at_40[0x40];
+ struct mlx5_ifc_parse_graph_flex_bits capability;
+};
+
/* CQE format mask. */
#define MLX5E_CQE_FORMAT_MASK 0xc
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index ae57ebd..c86497f 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -11,6 +11,7 @@ INTERNAL {
mlx5_dev_to_pci_addr;
mlx5_devx_cmd_create_cq;
+ mlx5_devx_cmd_create_flex_parser;
mlx5_devx_cmd_create_qp;
mlx5_devx_cmd_create_rq;
mlx5_devx_cmd_create_rqt;
@@ -32,6 +33,7 @@ INTERNAL {
mlx5_devx_cmd_modify_virtq;
mlx5_devx_cmd_qp_query_tis_td;
mlx5_devx_cmd_query_hca_attr;
+ mlx5_devx_cmd_query_parse_samples;
mlx5_devx_cmd_query_virtio_q_counters;
mlx5_devx_cmd_query_virtq;
mlx5_devx_get_out_command_status;
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 5/7] net/mlx5: create and destroy eCPRI flex parser
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (3 preceding siblings ...)
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 15:05 ` Slava Ovsiienko
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
` (2 subsequent siblings)
7 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
The eCPRI protocol has a unified format layout for its variants,
over the ETH layer (including .1Q) and over the UDP layer.
The common header of the message has 4 bytes fixed length, and the
message payload layers are different based on the type field. Now
only type #0, #2 and #5 will be supported, and 2 bytes are needed.
When creating the flex parser, the header will be extended to 8
bytes and 2 DW samples are needed. The 1st DW starts from offset 0
and will be used for the type field of the common header. The 2nd
DW starts from offset 4 and will be used for the physical channel
ID, real-time control ID or measurement ID fields.
The parser will be created once a flow with an eCPRI item is
observed for the first time. After creation, it will remain in the
system and HW until the device is stopped. Right now, there is no
need to destroy the eCPRI flex parser after the last flow with an
eCPRI item is destroyed; this avoids alternating between creating
and destroying the eCPRI flex parser when a single eCPRI flow is
repeatedly created and destroyed.
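For reference, here is a hypothetical application-side sketch of such
a first flow rule (port_id and the queue index are placeholders and
error handling is omitted); creating it triggers the one-time flex
parser allocation inside the PMD:

/* ETH with ethertype 0xAEFE followed by an eCPRI item matching only
 * the type field (a wildcard rule on the message body).
 */
struct rte_flow_attr attr = { .ingress = 1 };
struct rte_flow_item_eth eth_spec = {
	.type = RTE_BE16(RTE_ETHER_TYPE_ECPRI),
};
struct rte_flow_item_eth eth_mask = { .type = RTE_BE16(0xffff) };
struct rte_flow_item_ecpri ecpri_spec = {
	.hdr = { .common = { .type = RTE_ECPRI_MSG_TYPE_RTC_CTRL } },
};
struct rte_flow_item_ecpri ecpri_mask = {
	.hdr = { .common = { .type = 0xff } },
};
struct rte_flow_item pattern[] = {
	{ .type = RTE_FLOW_ITEM_TYPE_ETH,
	  .spec = &eth_spec, .mask = &eth_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_ECPRI,
	  .spec = &ecpri_spec, .mask = &ecpri_mask },
	{ .type = RTE_FLOW_ITEM_TYPE_END },
};
struct rte_flow_action_queue queue = { .index = 0 };
struct rte_flow_action actions[] = {
	{ .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
	{ .type = RTE_FLOW_ACTION_TYPE_END },
};
struct rte_flow_error flow_err;
struct rte_flow *flow = rte_flow_create(port_id, &attr, pattern,
					actions, &flow_err);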
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 55 ++++++++++++++++++++++++++++++++++++++---
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow_dv.c | 3 ++-
3 files changed, 55 insertions(+), 4 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index daa9467..aec0173 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -630,8 +630,49 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_flex_parser_profiles *prf =
&priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+ struct mlx5_devx_graph_node_attr node = {
+ .modify_field_select = 0,
+ };
+ uint32_t ids[8];
+ int ret;
- (void)prf;
+ node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
+ /* 8 bytes now: 4B common header + 4B message body header. */
+ node.header_length_base_value = 0x8;
+ /* After MAC layer: Ether / VLAN. */
+ node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_MAC;
+ /* Type of compared condition should be 0xAEFE in the L2 layer. */
+ node.in[0].compare_condition_value = RTE_ETHER_TYPE_ECPRI;
+ /* Sample #0: type in common header. */
+ node.sample[0].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[0].flow_match_sample_offset_mode = 0x0;
+ /* Only the 2nd byte will be used. */
+ node.sample[0].flow_match_sample_field_base_offset = 0x0;
+ /* Sample #1: message payload. */
+ node.sample[1].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[1].flow_match_sample_offset_mode = 0x0;
+ /*
+ * Only the first two bytes will be used right now, and their offset
+ * starts right after the common header, which is one DW (u32) long.
+ */
+ node.sample[1].flow_match_sample_field_base_offset = sizeof(uint32_t);
+ prf->obj = mlx5_devx_cmd_create_flex_parser(priv->sh->ctx, &node);
+ if (!prf->obj) {
+ DRV_LOG(ERR, "Failed to create flex parser node object.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->num = 2;
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample IDs.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->offset[0] = 0x0;
+ prf->offset[1] = sizeof(uint32_t);
+ prf->ids[0] = ids[0];
+ prf->ids[1] = ids[1];
return 0;
}
@@ -643,9 +684,15 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
* Pointer to Ethernet device structure.
*/
static void
-flow_dv_flex_parser_ecpri_release(struct rte_eth_dev *dev)
+mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
{
- (void)dev;
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ if (prf->obj)
+ mlx5_devx_cmd_destroy(prf->obj);
+ prf->obj = NULL;
}
/**
@@ -1209,6 +1256,8 @@ mlx5_dev_close(struct rte_eth_dev *dev)
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_req_stop_rxtx(dev);
+ /* Free the eCPRI flex parser resource. */
+ mlx5_flex_parser_ecpri_release(dev);
if (priv->rxqs != NULL) {
/* XXX race condition if mlx5_rx_burst() is still running. */
usleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index c1b30be..42a8bd5 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -959,4 +959,5 @@ int mlx5_os_get_stats_n(struct rte_eth_dev *dev);
void mlx5_os_stats_init(struct rte_eth_dev *dev);
void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb,
mlx5_dereg_mr_t *dereg_mr_cb);
+
#endif /* RTE_PMD_MLX5_H_ */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index cd2b0f0..ceb585d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8662,10 +8662,11 @@ __flow_dv_translate(struct rte_eth_dev *dev,
break;
case RTE_FLOW_ITEM_TYPE_ECPRI:
if (!mlx5_flex_parser_ecpri_exist(dev)) {
+ /* Create it only the first time to be used. */
ret = mlx5_flex_parser_ecpri_alloc(dev);
if (ret)
return rte_flow_error_set
- (error, ret,
+ (error, -ret,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL,
"cannot create eCPRI parser");
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 6/7] net/mlx5: add eCPRI flex parser capacity check
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (4 preceding siblings ...)
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 15:05 ` Slava Ovsiienko
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 7/7] doc: update release notes and guides for eCPRI Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
7 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
If the NIC or the FW does not support the dynamic flex parser, an
error will be returned when trying to create the parser for eCPRI.
Then it is hard to know the detailed reason of the failure.
Before creating the parser node and using the parser afterwards, the
capability bit saved in the HCA_CAP can be used to confirm whether
the dynamic flex parser is supported.
If not, an error with ENOTSUP will be returned directly to prevent
the following steps from being executed.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
v3: fix the wrong member name in the private structure.
---
drivers/net/mlx5/mlx5.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index aec0173..137bb5c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -636,6 +636,11 @@ mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
uint32_t ids[8];
int ret;
+ if (!priv->config.hca_attr.parse_graph_flex_node) {
+ DRV_LOG(ERR, "Dynamic flex parser is not supported "
+ "for device %s.", priv->dev_data->name);
+ return -ENOTSUP;
+ }
node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
/* 8 bytes now: 4B common header + 4B message body header. */
node.header_length_base_value = 0x8;
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v3 7/7] doc: update release notes and guides for eCPRI
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (5 preceding siblings ...)
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
@ 2020-07-16 14:23 ` Bing Zhao
2020-07-16 15:05 ` Slava Ovsiienko
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
7 siblings, 1 reply; 40+ messages in thread
From: Bing Zhao @ 2020-07-16 14:23 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
Update the mlx5 PMD part of the release notes by adding the support
of eCPRI.
Update the firmware configuration in the mlx5 NIC guide with the
settings needed for using eCPRI.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
doc/guides/nics/mlx5.rst | 5 +++++
doc/guides/rel_notes/release_20_08.rst | 1 +
2 files changed, 6 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 4b6d8fb..191ec04 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -956,6 +956,11 @@ Below are some firmware configurations listed.
FLEX_PARSER_PROFILE_ENABLE=3
+- enable eCPRI flow matching::
+
+ FLEX_PARSER_PROFILE_ENABLE=4
+ PROG_PARSE_GRAPH=1
+
Prerequisites
-------------
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index f19b748..6f44ffd 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -122,6 +122,7 @@ New Features
* Added new PMD devarg ``reclaim_mem_mode``.
* Added new devarg ``lacp_by_user``.
+ * Added support for eCPRI protocol offloading.
* **Added vDPA device APIs to query virtio queue statistics.**
--
2.5.5
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
@ 2020-07-16 15:04 ` Slava Ovsiienko
0 siblings, 0 replies; 40+ messages in thread
From: Slava Ovsiienko @ 2020-07-16 15:04 UTC (permalink / raw)
To: Bing Zhao, Ori Kam; +Cc: Raslan Darawsheh, Matan Azrad, dev, Netanel Gonen
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Thursday, July 16, 2020 17:24
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header
>
> When creating a flow with eCPRI header item, the validation of it is
> mandatory. The detailed limitations are listed below:
> 1. Over Ether / VLAN, ethertype must be 0xAEFE.
> 2. No tunnel support is described in the specification now.
> 3. L3 layer is only supported when L4 is UDP, see #4.
> 4. Over TCP is not supported from the specification, and over UDP
> is not supported right now.
> 5. Concatenation indicator matching is not supported now.
> 6. No need to check the revision.
> 7. Only type field in the common header is mandatory, and one byte
> should be matched integrally.
> 8. Fields in the message payload header are optional.
> 9. Only messages with type #0, #2 and #5 are supported now.
>
> Some limitations are only from software right now, because there is no need
> to support all the message types and variants of protocol stack listed in the
> specification.
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3 2/7] net/mlx5: add flow translation of eCPRI header
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add flow translation " Bing Zhao
@ 2020-07-16 15:04 ` Slava Ovsiienko
0 siblings, 0 replies; 40+ messages in thread
From: Slava Ovsiienko @ 2020-07-16 15:04 UTC (permalink / raw)
To: Bing Zhao, Ori Kam; +Cc: Raslan Darawsheh, Matan Azrad, dev, Netanel Gonen
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Thursday, July 16, 2020 17:24
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v3 2/7] net/mlx5: add flow translation of eCPRI header
>
> In the translation stage, the eCPRI item should be translated into the format
> that lower layer driver could use. All the fields that need to match must be in
> network byte order after translation, as well as the mask. Since the header in
> the item belongs to the network layers stack, and the input parameter of the
> header is considered to be in big-endian format already.
>
> Base on the definition in the PRM, the DW samples will be used for matching
> in the FTE/STE. Now, the type field and only the PC ID, RTC ID, and DLY MSR
> ID of the payload will be supported. The masks should be 00 ff 00 00 ff ff(00)
> 00 00 in the network order. Two DWs are needed to support such matching.
> The mask fields could be zeros to support some wildcard rules. But it makes
> no sense to support the rule matching only on the payload but without
> matching type field.
>
> The DW samples should be stored after the flex parser creation for eCPRI.
> There is no need to query the sample IDs each time when creating a flow rule
> with eCPRI item. It will not introduce insertion rate degradation significantly.
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3 3/7] common/mlx5: add flex parser DevX structures
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
@ 2020-07-16 15:04 ` Slava Ovsiienko
0 siblings, 0 replies; 40+ messages in thread
From: Slava Ovsiienko @ 2020-07-16 15:04 UTC (permalink / raw)
To: Bing Zhao, Ori Kam; +Cc: Raslan Darawsheh, Matan Azrad, dev, Netanel Gonen
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Thursday, July 16, 2020 17:24
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v3 3/7] common/mlx5: add flex parser DevX structures
>
> The structures and other definitions will be used for the dynamic flex parser
> creation via Devx command interface. These structures will be used as some
> some intermediate variables and input parameters for the parser creation
> API.
> It is better to keep all members consistent with the PRM definition even
> though some of them will not be used.
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3 4/7] common/mlx5: adding DevX command for flex parsers
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
@ 2020-07-16 15:05 ` Slava Ovsiienko
0 siblings, 0 replies; 40+ messages in thread
From: Slava Ovsiienko @ 2020-07-16 15:05 UTC (permalink / raw)
To: Bing Zhao, Ori Kam; +Cc: Raslan Darawsheh, Matan Azrad, dev, Netanel Gonen
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Thursday, July 16, 2020 17:24
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v3 4/7] common/mlx5: adding DevX command for flex
> parsers
>
> In order to use dynamic flex parser to parse protocols that is not supported
> natively, two steps are needed.
>
> Firstly, creating the parse graph node. There are three parts of the flex
> parser: node, arc and sample. Node is the whole structure of a flex parser,
> when creating, the length of the protocol should be specified. Then the input
> arc(s) is(are) mandatory, it will tell the HW when to use this parser to parse
> the packet. For a single parser node, up to 8 input arcs could be supported
> and it gives SW ability to support this protocol over multiple layers. The
> output arc is optional and also up to 8 arcs could be supported. If the
> protocol is the last header of the stack, then output arc should be NULL. Or
> else it should be specified. The protocol type in the arc is used to indicate the
> parser pointing to or from this flex parser node. For output arc, the next
> header type field offset and size should be set in the node structure, then the
> HW could get the proper type of the next header and decide which parser to
> point to.
> Note: the parsers have two types now, native parser and flex parser.
> The arc between two flex parsers are not supported in this stage.
>
> Secondly, querying the sample IDs. If the protocol header parsed with flex
> parser needs to used in flow rule offloading, the DW samples are needed
> when creating the parse graph node. The offset of bytes starting from the
> header needs to be set. After creating the node successfully, a general object
> handle will be returned.
> This object could be queryed with Devx command to get the sample IDs.
> When creating a flow, sample IDs could be used to sample a DW from the
> parsed header - 4 continuous bytes starting from the offset. The flow entry
> could specify some mask to use part of this DW for matching. Up to 8
> samples could be supported for a single parse graph node. The offset should
> not exceed the header length.
>
> The HW resources have some limitation, low layer driver error should be
> checked once there is a failure of creating parse graph node.
>
> Signed-off-by: Netanel Gonen <netanelg@mellanox.com>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3 5/7] net/mlx5: create and destroy eCPRI flex parser
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
@ 2020-07-16 15:05 ` Slava Ovsiienko
0 siblings, 0 replies; 40+ messages in thread
From: Slava Ovsiienko @ 2020-07-16 15:05 UTC (permalink / raw)
To: Bing Zhao, Ori Kam; +Cc: Raslan Darawsheh, Matan Azrad, dev, Netanel Gonen
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Thursday, July 16, 2020 17:24
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v3 5/7] net/mlx5: create and destroy eCPRI flex parser
>
> eCPRI protocol has unified format layout for the variants, over ETH layer
> (including .1Q) and UDP layer.
>
> The common header of the message has 4 bytes fixed length, and the
> message payload layers are different based on the type field. Now only type
> #0, #2 and #5 will be supported, and 2 bytes are needed.
>
> When creating the flex parser, the header will be extended to 8 bytes and 2
> DW samples are needed. The 1st DW starts from offset 0 and will be used for
> the type field of the common header. The 2nd DW starts from offset 4 and
> will be used for the physical channel ID, real-time control ID or measurement
> ID fields.
>
> The parser will be created once a flow with eCPRI item is observed for the
> first time. After creating, it will remain in the system and HW until the device
> is stopped. Right now, there is no need to destroy the eCPRI flex parser after
> the last flow with eCPRI item is destroyed. This is to get rid of the alternate
> states of creating and destroying eCPRI flex parser with a single eCPRI flow.
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3 6/7] net/mlx5: add eCPRI flex parser capacity check
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
@ 2020-07-16 15:05 ` Slava Ovsiienko
0 siblings, 0 replies; 40+ messages in thread
From: Slava Ovsiienko @ 2020-07-16 15:05 UTC (permalink / raw)
To: Bing Zhao, Ori Kam; +Cc: Raslan Darawsheh, Matan Azrad, dev, Netanel Gonen
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Thursday, July 16, 2020 17:24
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v3 6/7] net/mlx5: add eCPRI flex parser capacity check
>
> If the NIC or the FW does not support the dynamic flex parser, it will return
> error when trying to create the parser for eCRPI.
> Then it is hard to know the detail error reason of the failure.
> Before creating the parser node and the following usage of the parser, the
> capacity bit saved in the HCA_CAP could be used to confirm if the dynamic
> flex parser is supported.
> If no, an error will be returned directly with ENOTSUP to prevent the
> following steps to be executed.
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v3 7/7] doc: update release notes and guides for eCPRI
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 7/7] doc: update release notes and guides for eCPRI Bing Zhao
@ 2020-07-16 15:05 ` Slava Ovsiienko
0 siblings, 0 replies; 40+ messages in thread
From: Slava Ovsiienko @ 2020-07-16 15:05 UTC (permalink / raw)
To: Bing Zhao, Ori Kam; +Cc: Raslan Darawsheh, Matan Azrad, dev, Netanel Gonen
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Thursday, July 16, 2020 17:24
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v3 7/7] doc: update release notes and guides for eCPRI
>
> Update the release notes of mlx5 PMD part by adding the support of eCPRI.
> Update the firmware configuration in the mlx5 NIC guide to support the
> usage of eCPRI.
>
> Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (6 preceding siblings ...)
2020-07-16 14:23 ` [dpdk-dev] [PATCH v3 7/7] doc: update release notes and guides for eCPRI Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
` (7 more replies)
7 siblings, 8 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
This patch set is to add the eCPRI support of flow rules in mlx5 PMD
driver. Right now, only eCPRI over Ethernet layer (including VLAN)
is supported. eCPRI over UDP will be supported in the future. If the
flow rule to be inserted is not supported, PMD driver will return
error to indicate the reason of the failure.
v2: listed as below
1. added document updates
2. add NIC / FW capacity check
3. fix mask of type in common header check and code cleanup
v3: fix the wrong member name in the private structure
v4: rebased and resolve conflict
Bing Zhao (7):
net/mlx5: add flow validation of eCPRI header
net/mlx5: add flow translation of eCPRI header
common/mlx5: add flex parser DevX structures
common/mlx5: adding DevX command for flex parsers
net/mlx5: create and destroy eCPRI flex parser
net/mlx5: add eCPRI flex parser capacity check
doc: update release notes and guides for eCPRI
doc/guides/nics/mlx5.rst | 5 +
doc/guides/rel_notes/release_20_08.rst | 1 +
drivers/common/mlx5/mlx5_devx_cmds.c | 170 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 52 ++++++++
drivers/common/mlx5/mlx5_prm.h | 115 +++++++++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
drivers/net/mlx5/mlx5.c | 107 +++++++++++++++
drivers/net/mlx5/mlx5.h | 19 +++
drivers/net/mlx5/mlx5_flow.c | 107 ++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++
drivers/net/mlx5/mlx5_flow_dv.c | 126 ++++++++++++++++++
11 files changed, 704 insertions(+), 9 deletions(-)
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 1/7] net/mlx5: add flow validation of eCPRI header
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 2/7] net/mlx5: add flow translation " Bing Zhao
` (6 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
When creating a flow with eCPRI header item, the validation of it is
mandatory. The detailed limitations are listed below:
1. Over Ether / VLAN, ethertype must be 0xAEFE.
2. No tunnel support is described in the specification now.
3. L3 layer is only supported when L4 is UDP, see #4.
4. Over TCP is not supported from the specification, and over UDP
is not supported right now.
5. Concatenation indicator matching is not supported now.
6. No need to check the revision.
7. Only type field in the common header is mandatory, and one byte
should be matched integrally.
8. Fields in the message payload header are optional.
9. Only messages with type #0, #2 and #5 are supported now.
Some limitations are only from software right now, because there is
no need to support all the message types and variants of protocol
stack listed in the specification.
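As a quick illustration of limitations #7 and #8 above and of the
mask checks added in this patch, here are a few hypothetical mask
combinations (assuming the item layout from the dependent eCPRI
rte_flow series) and how the validation treats them:

/* Accepted: full type byte plus the first two bytes of the message
 * body (e.g. PC ID), both in network byte order.
 */
struct rte_flow_item_ecpri ok_mask = {
	.hdr = {
		.common = { .type = 0xff },
		.dummy[0] = RTE_BE32(0xffff0000),
	},
};
/* Rejected: partial mask on the type field, the whole byte must be
 * matched integrally.
 */
struct rte_flow_item_ecpri partial_type_mask = {
	.hdr = { .common = { .type = 0x0f } },
};
/* Rejected: message header mask without a type mask. */
struct rte_flow_item_ecpri no_type_mask = {
	.hdr = { .dummy[0] = RTE_BE32(0xffff0000) },
};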
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5_flow.c | 107 +++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5_flow.h | 9 ++++
drivers/net/mlx5/mlx5_flow_dv.c | 23 +++++++++
3 files changed, 138 insertions(+), 1 deletion(-)
diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c
index ae5ccc2..12d80b5 100644
--- a/drivers/net/mlx5/mlx5_flow.c
+++ b/drivers/net/mlx5/mlx5_flow.c
@@ -1227,11 +1227,17 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
RTE_FLOW_ERROR_TYPE_ATTR_EGRESS, NULL,
"rss action not supported for "
"egress");
- if (rss->level > 1 && !tunnel)
+ if (rss->level > 1 && !tunnel)
return rte_flow_error_set(error, EINVAL,
RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
"inner RSS is not supported for "
"non-tunnel flows");
+ if ((item_flags & MLX5_FLOW_LAYER_ECPRI) &&
+ !(item_flags & MLX5_FLOW_LAYER_INNER_L4_UDP)) {
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ACTION_CONF, NULL,
+ "RSS on eCPRI is not supported now");
+ }
return 0;
}
@@ -1597,6 +1603,10 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -1695,6 +1705,10 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
* Item specification.
* @param[in] item_flags
* Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
* @param[in] acc_mask
* Acceptable mask, if NULL default internal default mask
* will be used to check whether item fields are supported.
@@ -2357,6 +2371,97 @@ uint32_t mlx5_flow_adjust_priority(struct rte_eth_dev *dev, int32_t priority,
return 0;
}
+/**
+ * Validate eCPRI item.
+ *
+ * @param[in] item
+ * Item specification.
+ * @param[in] item_flags
+ * Bit-fields that holds the items detected until now.
+ * @param[in] last_item
+ * Previous validated item in the pattern items.
+ * @param[in] ether_type
+ * Type in the ethernet layer header (including dot1q).
+ * @param[in] acc_mask
+ * Acceptable mask, if NULL default internal default mask
+ * will be used to check whether item fields are supported.
+ * @param[out] error
+ * Pointer to error structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error)
+{
+ const struct rte_flow_item_ecpri *mask = item->mask;
+ const struct rte_flow_item_ecpri nic_mask = {
+ .hdr = {
+ .common = {
+ .u32 =
+ RTE_BE32(((const struct rte_ecpri_common_hdr) {
+ .type = 0xFF,
+ }).u32),
+ },
+ .dummy[0] = 0xFFFFFFFF,
+ },
+ };
+ const uint64_t outer_l2_vlan = (MLX5_FLOW_LAYER_OUTER_L2 |
+ MLX5_FLOW_LAYER_OUTER_VLAN);
+ struct rte_flow_item_ecpri mask_lo;
+
+ if ((last_item & outer_l2_vlan) && ether_type &&
+ ether_type != RTE_ETHER_TYPE_ECPRI)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow L2/VLAN layer "
+ "which ether type is not 0xAEFE.");
+ if (item_flags & MLX5_FLOW_LAYER_TUNNEL)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI with tunnel is not supported "
+ "right now.");
+ if (item_flags & MLX5_FLOW_LAYER_OUTER_L3)
+ return rte_flow_error_set(error, ENOTSUP,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "multiple L3 layers not supported");
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_TCP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI cannot follow a TCP layer.");
+ /* In specification, eCPRI could be over UDP layer. */
+ else if (item_flags & MLX5_FLOW_LAYER_OUTER_L4_UDP)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM, item,
+ "eCPRI over UDP layer is not yet "
+ "supported right now.");
+ /* Mask for type field in common header could be zero. */
+ if (!mask)
+ mask = &rte_flow_item_ecpri_mask;
+ mask_lo.hdr.common.u32 = rte_be_to_cpu_32(mask->hdr.common.u32);
+ /* Input mask is in big-endian format. */
+ if (mask_lo.hdr.common.type != 0 && mask_lo.hdr.common.type != 0xff)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "partial mask is not supported "
+ "for protocol");
+ else if (mask_lo.hdr.common.type == 0 && mask->hdr.dummy[0] != 0)
+ return rte_flow_error_set(error, EINVAL,
+ RTE_FLOW_ERROR_TYPE_ITEM_MASK, mask,
+ "message header mask must be after "
+ "a type mask");
+ return mlx5_flow_item_acceptable(item, (const uint8_t *)mask,
+ acc_mask ? (const uint8_t *)acc_mask
+ : (const uint8_t *)&nic_mask,
+ sizeof(struct rte_flow_item_ecpri),
+ error);
+}
+
/* Allocate unique ID for the split Q/RSS subflows. */
static uint32_t
flow_qrss_get_id(struct rte_eth_dev *dev)
diff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h
index 43cbda8..6dfeef3 100644
--- a/drivers/net/mlx5/mlx5_flow.h
+++ b/drivers/net/mlx5/mlx5_flow.h
@@ -128,6 +128,9 @@ enum mlx5_feature_name {
/* Pattern tunnel Layer bits (continued). */
#define MLX5_FLOW_LAYER_GTP (1u << 28)
+/* Pattern eCPRI Layer bit. */
+#define MLX5_FLOW_LAYER_ECPRI (UINT64_C(1) << 29)
+
/* Outer Masks. */
#define MLX5_FLOW_LAYER_OUTER_L3 \
(MLX5_FLOW_LAYER_OUTER_L3_IPV4 | MLX5_FLOW_LAYER_OUTER_L3_IPV6)
@@ -1027,6 +1030,12 @@ int mlx5_flow_validate_item_geneve(const struct rte_flow_item *item,
uint64_t item_flags,
struct rte_eth_dev *dev,
struct rte_flow_error *error);
+int mlx5_flow_validate_item_ecpri(const struct rte_flow_item *item,
+ uint64_t item_flags,
+ uint64_t last_item,
+ uint16_t ether_type,
+ const struct rte_flow_item_ecpri *acc_mask,
+ struct rte_flow_error *error);
struct mlx5_meter_domains_infos *mlx5_flow_create_mtr_tbls
(struct rte_eth_dev *dev,
const struct mlx5_flow_meter *fm);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index 8b5b683..f042a42 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -4923,6 +4923,17 @@ struct field_modify_info modify_tcp[] = {
.hop_limits = 0xff,
},
};
+ const struct rte_flow_item_ecpri nic_ecpri_mask = {
+ .hdr = {
+ .common = {
+ .u32 =
+ RTE_BE32(((const struct rte_ecpri_common_hdr) {
+ .type = 0xFF,
+ }).u32),
+ },
+ .dummy[0] = 0xffffffff,
+ },
+ };
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_dev_config *dev_conf = &priv->config;
uint16_t queue_index = 0xFFFF;
@@ -5173,6 +5184,17 @@ struct field_modify_info modify_tcp[] = {
return ret;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ /* Capacity will be checked in the translate stage. */
+ ret = mlx5_flow_validate_item_ecpri(items, item_flags,
+ last_item,
+ ether_type,
+ &nic_ecpri_mask,
+ error);
+ if (ret < 0)
+ return ret;
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
return rte_flow_error_set(error, ENOTSUP,
RTE_FLOW_ERROR_TYPE_ITEM,
@@ -5882,6 +5904,7 @@ struct field_modify_info modify_tcp[] = {
* Set match on ethertype only if ETH header is not followed by VLAN.
* HW is optimized for IPv4/IPv6. In such cases, avoid setting
* ethertype, and use ip_version field instead.
+ * eCPRI over Ether layer will use type value 0xAEFE.
*/
if (eth_v->type == RTE_BE16(RTE_ETHER_TYPE_IPV4) &&
eth_m->type == 0xFFFF) {
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 2/7] net/mlx5: add flow translation of eCPRI header
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
` (5 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
In the translation stage, the eCPRI item should be translated into
the format that the lower layer driver can use. All the fields that
need to match must be in network byte order after translation, as
well as the mask. Since the header in the item belongs to the network
protocol stack, the input header is considered to be in big-endian
format already.
Based on the definition in the PRM, the DW samples will be used for
matching in the FTE/STE. Now, only the type field and the PC ID, RTC
ID, and DLY MSR ID of the payload will be supported. The masks should
be 00 ff 00 00 ff ff(00) 00 00 in network order. Two DWs are
needed to support such matching. The mask fields could be zeros to
support some wildcard rules, but it makes no sense to support a rule
matching only on the payload without matching the type field.
The DW sample IDs should be stored after the flex parser creation for
eCPRI, so there is no need to query them each time a flow rule with
an eCPRI item is created. This avoids significant insertion rate
degradation.
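As an illustration only (not part of the patch), the sketch below fills
an item spec/mask pair the way the text above describes: DW0 carries the
common header with the type in its second byte (mask 00 ff 00 00), and
DW1 carries the first payload DW with the RTC/PC/MSR ID in its first two
bytes (mask ff ff 00 00). The helper name and the RTC ID parameter are
made up for the example.

#include <stdint.h>
#include <rte_byteorder.h>
#include <rte_ecpri.h>
#include <rte_flow.h>

static void
fill_ecpri_rtc_match(struct rte_flow_item_ecpri *spec,
		     struct rte_flow_item_ecpri *mask, uint16_t rtc_id)
{
	/* DW0: type is the 2nd byte of the common header -> 00 ff 00 00. */
	spec->hdr.common.u32 =
		rte_cpu_to_be_32((uint32_t)RTE_ECPRI_MSG_TYPE_RTC_CTRL << 16);
	mask->hdr.common.u32 = rte_cpu_to_be_32(0x00ff0000);
	/* DW1: RTC ID is the first two payload bytes -> ff ff 00 00. */
	spec->hdr.dummy[0] = rte_cpu_to_be_32((uint32_t)rtc_id << 16);
	mask->hdr.dummy[0] = rte_cpu_to_be_32(0xffff0000);
}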
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
v2: fix the endianess issue of type mask field checking.
---
drivers/common/mlx5/mlx5_prm.h | 16 ++++++-
drivers/net/mlx5/mlx5.c | 40 ++++++++++++++++
drivers/net/mlx5/mlx5.h | 18 +++++++
drivers/net/mlx5/mlx5_flow_dv.c | 102 ++++++++++++++++++++++++++++++++++++++++
4 files changed, 175 insertions(+), 1 deletion(-)
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index b37be30..4c06c37 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -741,6 +741,18 @@ struct mlx5_ifc_fte_match_set_misc3_bits {
u8 reserved_at_170[0x90];
};
+struct mlx5_ifc_fte_match_set_misc4_bits {
+ u8 prog_sample_field_value_0[0x20];
+ u8 prog_sample_field_id_0[0x20];
+ u8 prog_sample_field_value_1[0x20];
+ u8 prog_sample_field_id_1[0x20];
+ u8 prog_sample_field_value_2[0x20];
+ u8 prog_sample_field_id_2[0x20];
+ u8 prog_sample_field_value_3[0x20];
+ u8 prog_sample_field_id_3[0x20];
+ u8 reserved_at_100[0x100];
+};
+
/* Flow matcher. */
struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits outer_headers;
@@ -748,6 +760,7 @@ struct mlx5_ifc_fte_match_param_bits {
struct mlx5_ifc_fte_match_set_lyr_2_4_bits inner_headers;
struct mlx5_ifc_fte_match_set_misc2_bits misc_parameters_2;
struct mlx5_ifc_fte_match_set_misc3_bits misc_parameters_3;
+ struct mlx5_ifc_fte_match_set_misc4_bits misc_parameters_4;
};
enum {
@@ -755,7 +768,8 @@ enum {
MLX5_MATCH_CRITERIA_ENABLE_MISC_BIT,
MLX5_MATCH_CRITERIA_ENABLE_INNER_BIT,
MLX5_MATCH_CRITERIA_ENABLE_MISC2_BIT,
- MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT
+ MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT,
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT,
};
enum {
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 10196ac..1ba5e0c 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -608,6 +608,46 @@ struct mlx5_flow_id_pool *
mlx5_ipool_destroy(sh->ipool[i]);
}
+/*
+ * Check if dynamic flex parser for eCPRI already exists.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * true on exists, false on not.
+ */
+bool
+mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ return !!prf->obj;
+}
+
+/*
+ * Allocation of a flex parser for eCPRI. Once created, this parser related
+ * resources will be held until the device is closed.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ *
+ * @return
+ * 0 on success, a negative errno value otherwise and rte_errno is set.
+ */
+int
+mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ (void)prf;
+ return 0;
+}
+
/**
* Allocate shared device context. If there is multiport device the
* master and representors will share this context, if there is single
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 1aa3a3e..9b29e3c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -590,6 +590,20 @@ struct mlx5_dev_txpp {
rte_atomic32_t err_ts_future; /* Timestamp in the distant future. */
};
+/* Supported flex parser profile ID. */
+enum mlx5_flex_parser_profile_id {
+ MLX5_FLEX_PARSER_ECPRI_0 = 0,
+ MLX5_FLEX_PARSER_MAX = 8,
+};
+
+/* Sample ID information of flex parser structure. */
+struct mlx5_flex_parser_profiles {
+ uint32_t num; /* Actual number of samples. */
+ uint32_t ids[8]; /* Sample IDs for this profile. */
+ uint8_t offset[8]; /* Bytes offset of each parser. */
+ void *obj; /* Flex parser node object. */
+};
+
/*
* Shared Infiniband device context for Master/Representors
* which belong to same IB device with multiple IB ports.
@@ -649,6 +663,8 @@ struct mlx5_dev_ctx_shared {
struct mlx5_devx_obj *td; /* Transport domain. */
struct mlx5_flow_id_pool *flow_id_pool; /* Flow ID pool. */
struct mlx5dv_devx_uar *tx_uar; /* Tx/packer pacing shared UAR. */
+ struct mlx5_flex_parser_profiles fp[MLX5_FLEX_PARSER_MAX];
+ /* Flex parser profiles information. */
struct mlx5_dev_shared_port port[]; /* per device port data array. */
};
@@ -784,6 +800,8 @@ int mlx5_dev_check_sibling_config(struct mlx5_priv *priv,
int mlx5_dev_set_mtu(struct rte_eth_dev *dev, uint16_t mtu);
int mlx5_hairpin_cap_get(struct rte_eth_dev *dev,
struct rte_eth_hairpin_cap *cap);
+bool mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev);
+int mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev);
/* mlx5_ethdev.c */
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index f042a42..cd2b0f0 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -7259,6 +7259,90 @@ struct field_modify_info modify_tcp[] = {
rte_be_to_cpu_32(gtp_v->teid & gtp_m->teid));
}
+/**
+ * Add eCPRI item to matcher and to the value.
+ *
+ * @param[in] dev
+ * The device to configure through.
+ * @param[in, out] matcher
+ * Flow matcher.
+ * @param[in, out] key
+ * Flow matcher value.
+ * @param[in] item
+ * Flow pattern to translate.
+ * @param[in] samples
+ * Sample IDs to be used in the matching.
+ */
+static void
+flow_dv_translate_item_ecpri(struct rte_eth_dev *dev, void *matcher,
+ void *key, const struct rte_flow_item *item)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ const struct rte_flow_item_ecpri *ecpri_m = item->mask;
+ const struct rte_flow_item_ecpri *ecpri_v = item->spec;
+ void *misc4_m = MLX5_ADDR_OF(fte_match_param, matcher,
+ misc_parameters_4);
+ void *misc4_v = MLX5_ADDR_OF(fte_match_param, key, misc_parameters_4);
+ uint32_t *samples;
+ void *dw_m;
+ void *dw_v;
+
+ if (!ecpri_v)
+ return;
+ if (!ecpri_m)
+ ecpri_m = &rte_flow_item_ecpri_mask;
+ /*
+ * Maximal four DW samples are supported in a single matching now.
+ * Two are used now for eCPRI matching:
+ * 1. Type: one byte, mask should be 0x00ff0000 in network order
+ * 2. ID of a message: one or two bytes, mask 0xffff0000 or 0xff000000
+ * if any.
+ */
+ if (!ecpri_m->hdr.common.u32)
+ return;
+ samples = priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0].ids;
+ /* Need to take the whole DW as the mask to fill the entry. */
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_0);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_0);
+ /* Already big endian (network order) in the header. */
+ *(uint32_t *)dw_m = ecpri_m->hdr.common.u32;
+ *(uint32_t *)dw_v = ecpri_v->hdr.common.u32;
+ /* Sample#0, used for matching type, offset 0. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_0, samples[0]);
+ /* It makes no sense to set the sample ID in the mask field. */
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_0, samples[0]);
+ /*
+ * Checking if message body part needs to be matched.
+ * Some wildcard rules only matching type field should be supported.
+ */
+ if (ecpri_m->hdr.dummy[0]) {
+ switch (ecpri_v->hdr.common.type) {
+ case RTE_ECPRI_MSG_TYPE_IQ_DATA:
+ case RTE_ECPRI_MSG_TYPE_RTC_CTRL:
+ case RTE_ECPRI_MSG_TYPE_DLY_MSR:
+ dw_m = MLX5_ADDR_OF(fte_match_set_misc4, misc4_m,
+ prog_sample_field_value_1);
+ dw_v = MLX5_ADDR_OF(fte_match_set_misc4, misc4_v,
+ prog_sample_field_value_1);
+ *(uint32_t *)dw_m = ecpri_m->hdr.dummy[0];
+ *(uint32_t *)dw_v = ecpri_v->hdr.dummy[0];
+ /* Sample#1, to match message body, offset 4. */
+ MLX5_SET(fte_match_set_misc4, misc4_m,
+ prog_sample_field_id_1, samples[1]);
+ MLX5_SET(fte_match_set_misc4, misc4_v,
+ prog_sample_field_id_1, samples[1]);
+ break;
+ default:
+ /* Others, do not match any sample ID. */
+ break;
+ }
+ }
+}
+
static uint32_t matcher_zero[MLX5_ST_SZ_DW(fte_match_param)] = { 0 };
#define HEADER_IS_ZERO(match_criteria, headers) \
@@ -7294,6 +7378,9 @@ struct field_modify_info modify_tcp[] = {
match_criteria_enable |=
(!HEADER_IS_ZERO(match_criteria, misc_parameters_3)) <<
MLX5_MATCH_CRITERIA_ENABLE_MISC3_BIT;
+ match_criteria_enable |=
+ (!HEADER_IS_ZERO(match_criteria, misc_parameters_4)) <<
+ MLX5_MATCH_CRITERIA_ENABLE_MISC4_BIT;
return match_criteria_enable;
}
@@ -8573,6 +8660,21 @@ struct field_modify_info modify_tcp[] = {
MLX5_PRIORITY_MAP_L2 : MLX5_PRIORITY_MAP_L4;
last_item = MLX5_FLOW_LAYER_GTP;
break;
+ case RTE_FLOW_ITEM_TYPE_ECPRI:
+ if (!mlx5_flex_parser_ecpri_exist(dev)) {
+ ret = mlx5_flex_parser_ecpri_alloc(dev);
+ if (ret)
+ return rte_flow_error_set
+ (error, ret,
+ RTE_FLOW_ERROR_TYPE_ITEM,
+ NULL,
+ "cannot create eCPRI parser");
+ }
+ flow_dv_translate_item_ecpri(dev, match_mask,
+ match_value, items);
+ /* No other protocol should follow eCPRI layer. */
+ last_item = MLX5_FLOW_LAYER_ECPRI;
+ break;
default:
break;
}
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 3/7] common/mlx5: add flex parser DevX structures
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 2/7] net/mlx5: add flow translation " Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
` (4 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
The structures and other definitions will be used for the dynamic
flex parser creation via the DevX command interface. These structures
will be used as intermediate variables and input parameters for the
parser creation API.
It is better to keep all members consistent with the PRM definition
even though some of them will not be used.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.h | 44 ++++++++++++++++++++++++++++++++++++
drivers/common/mlx5/mlx5_prm.h | 30 ++++++++++++++++++++++++
drivers/net/mlx5/mlx5.h | 2 +-
3 files changed, 75 insertions(+), 1 deletion(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 34482e1..4811a1a 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -309,6 +309,50 @@ struct mlx5_devx_virtio_q_couners_attr {
uint32_t invalid_buffer;
};
+/*
+ * graph flow match sample attributes structure,
+ * used by flex parser operations.
+ */
+struct mlx5_devx_match_sample_attr {
+ uint32_t flow_match_sample_en:1;
+ uint32_t flow_match_sample_field_offset:16;
+ uint32_t flow_match_sample_offset_mode:4;
+ uint32_t flow_match_sample_field_offset_mask;
+ uint32_t flow_match_sample_field_offset_shift:4;
+ uint32_t flow_match_sample_field_base_offset:8;
+ uint32_t flow_match_sample_tunnel_mode:3;
+ uint32_t flow_match_sample_field_id;
+};
+
+/* graph node arc attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_arc_attr {
+ uint32_t compare_condition_value:16;
+ uint32_t start_inner_tunnel:1;
+ uint32_t arc_parse_graph_node:8;
+ uint32_t parse_graph_node_handle;
+};
+
+/* Maximal number of samples per graph node. */
+#define MLX5_GRAPH_NODE_SAMPLE_NUM 8
+
+/* Maximal number of input/output arcs per graph node. */
+#define MLX5_GRAPH_NODE_ARC_NUM 8
+
+/* parse graph node attributes structure, used by flex parser operations. */
+struct mlx5_devx_graph_node_attr {
+ uint32_t modify_field_select;
+ uint32_t header_length_mode:4;
+ uint32_t header_length_base_value:16;
+ uint32_t header_length_field_shift:4;
+ uint32_t header_length_field_offset:16;
+ uint32_t header_length_field_mask;
+ struct mlx5_devx_match_sample_attr sample[MLX5_GRAPH_NODE_SAMPLE_NUM];
+ uint32_t next_header_field_offset:16;
+ uint32_t next_header_field_size:5;
+ struct mlx5_devx_graph_arc_attr in[MLX5_GRAPH_NODE_ARC_NUM];
+ struct mlx5_devx_graph_arc_attr out[MLX5_GRAPH_NODE_ARC_NUM];
+};
+
/* mlx5_devx_cmds.c */
__rte_internal
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 4c06c37..81ff3d0 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -2650,6 +2650,36 @@ enum {
/* The bits meter color use. */
#define MLX5_MTR_COLOR_BITS 8
+/* Length mode of dynamic flex parser graph node. */
+enum mlx5_parse_graph_node_len_mode {
+ MLX5_GRAPH_NODE_LEN_FIXED = 0x0,
+ MLX5_GRAPH_NODE_LEN_FIELD = 0x1,
+ MLX5_GRAPH_NODE_LEN_BITMASK = 0x2,
+};
+
+/* Offset mode of the samples of flex parser. */
+enum mlx5_parse_graph_flow_match_sample_offset_mode {
+ MLX5_GRAPH_SAMPLE_OFFSET_FIXED = 0x0,
+ MLX5_GRAPH_SAMPLE_OFFSET_FIELD = 0x1,
+ MLX5_GRAPH_SAMPLE_OFFSET_BITMASK = 0x2,
+};
+
+/* Node index for an input / output arc of the flex parser graph. */
+enum mlx5_parse_graph_arc_node_index {
+ MLX5_GRAPH_ARC_NODE_NULL = 0x0,
+ MLX5_GRAPH_ARC_NODE_HEAD = 0x1,
+ MLX5_GRAPH_ARC_NODE_MAC = 0x2,
+ MLX5_GRAPH_ARC_NODE_IP = 0x3,
+ MLX5_GRAPH_ARC_NODE_GRE = 0x4,
+ MLX5_GRAPH_ARC_NODE_UDP = 0x5,
+ MLX5_GRAPH_ARC_NODE_MPLS = 0x6,
+ MLX5_GRAPH_ARC_NODE_TCP = 0x7,
+ MLX5_GRAPH_ARC_NODE_VXLAN_GPE = 0x8,
+ MLX5_GRAPH_ARC_NODE_GENEVE = 0x9,
+ MLX5_GRAPH_ARC_NODE_IPSEC_ESP = 0xa,
+ MLX5_GRAPH_ARC_NODE_PROGRAMMABLE = 0x1f,
+};
+
/**
* Convert a user mark to flow mark.
*
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 9b29e3c..61abe4a 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -600,7 +600,7 @@ enum mlx5_flex_parser_profile_id {
struct mlx5_flex_parser_profiles {
uint32_t num; /* Actual number of samples. */
uint32_t ids[8]; /* Sample IDs for this profile. */
- uint8_t offset[8]; /* Bytes offset of each parser. */
+ uint8_t offset[8]; /* Bytes offset of each parser. */
void *obj; /* Flex parser node object. */
};
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 4/7] common/mlx5: adding DevX command for flex parsers
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (2 preceding siblings ...)
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
` (3 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
In order to use the dynamic flex parser to parse protocols that are
not supported natively, two steps are needed.
Firstly, create the parse graph node. A flex parser has three parts:
node, arc and sample. The node is the whole structure of a flex
parser; when creating it, the length of the protocol should be
specified. The input arc(s) is(are) mandatory and tells the HW when
to use this parser to parse the packet. For a single parser node, up
to 8 input arcs could be supported, which gives SW the ability to
support this protocol over multiple layers. The output arc is
optional and also up to 8 arcs could be supported. If the protocol
is the last header of the stack, the output arc should be NULL;
otherwise it should be specified. The protocol type in the arc is
used to indicate the parser pointed to or from this flex parser node.
For an output arc, the next header type field offset and size should
be set in the node structure, so the HW could get the proper type of
the next header and decide which parser to point to.
Note: the parsers have two types now, native parser and flex parser.
Arcs between two flex parsers are not supported at this stage.
Secondly, query the sample IDs. If the protocol header parsed with
the flex parser needs to be used in flow rule offloading, the DW
samples are needed when creating the parse graph node. The byte
offset starting from the header needs to be set. After the node is
created successfully, a general object handle will be returned.
This object could be queried with a DevX command to get the sample
IDs.
When creating a flow, the sample IDs could be used to sample a DW
from the parsed header - 4 contiguous bytes starting from the offset.
The flow entry could specify a mask to use part of this DW for
matching. Up to 8 samples could be supported for a single parse
graph node. The offset should not exceed the header length.
The HW resources have some limitations; the low layer driver error
should be checked once creating a parse graph node fails.
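A minimal call-sequence sketch of the two steps, using the structures
and commands added by this series: it creates a parser with a single
fixed-offset DW sample hooked after the MAC layer. The ethertype value,
the 8-byte header length, the helper name and the include paths are
assumptions for the example; patch 5/7 contains the real eCPRI
implementation.

#include <mlx5_devx_cmds.h>
#include <mlx5_prm.h>

static struct mlx5_devx_obj *
create_one_sample_parser(void *ctx, uint32_t *sample_id)
{
	struct mlx5_devx_graph_node_attr node = { .modify_field_select = 0 };
	struct mlx5_devx_obj *obj;
	uint32_t ids[MLX5_GRAPH_NODE_SAMPLE_NUM] = { 0 };

	/* Fixed-length header of 8 bytes. */
	node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
	node.header_length_base_value = 0x8;
	/* Input arc: enter this parser after MAC on ethertype 0xAEFE. */
	node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_MAC;
	node.in[0].compare_condition_value = 0xAEFE;
	/* No output arc: this header is the last one of the stack. */
	/* One DW sample at fixed offset 0 from the header start. */
	node.sample[0].flow_match_sample_en = 1;
	node.sample[0].flow_match_sample_offset_mode =
			MLX5_GRAPH_SAMPLE_OFFSET_FIXED;
	node.sample[0].flow_match_sample_field_base_offset = 0x0;
	/* Step 1: create the parse graph node object. */
	obj = mlx5_devx_cmd_create_flex_parser(ctx, &node);
	if (!obj)
		return NULL;
	/* Step 2: query the sample ID to be used later in flow matching. */
	if (mlx5_devx_cmd_query_parse_samples(obj, ids, 1)) {
		mlx5_devx_cmd_destroy(obj);
		return NULL;
	}
	*sample_id = ids[0];
	return obj;
}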
Signed-off-by: Netanel Gonen <netanelg@mellanox.com>
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/common/mlx5/mlx5_devx_cmds.c | 170 +++++++++++++++++++++++-
drivers/common/mlx5/mlx5_devx_cmds.h | 8 ++
drivers/common/mlx5/mlx5_prm.h | 69 +++++++++-
drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
4 files changed, 242 insertions(+), 7 deletions(-)
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index 13cd76a..0cfa4dc 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -456,6 +456,167 @@ struct mlx5_devx_obj *
}
}
+int
+mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num)
+{
+ uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_out, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_out, out, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ int ret;
+ uint32_t idx = 0;
+ uint32_t i;
+
+ if (num > MLX5_GRAPH_NODE_SAMPLE_NUM) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Too many sample IDs to be fetched.");
+ return -rte_errno;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, flex_obj->id);
+ ret = mlx5_glue->devx_obj_query(flex_obj->obj, in, sizeof(in),
+ out, sizeof(out));
+ if (ret) {
+ rte_errno = ret;
+ DRV_LOG(ERR, "Failed to query sample IDs with object %p.",
+ (void *)flex_obj);
+ return -rte_errno;
+ }
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+ uint32_t en;
+
+ en = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en);
+ if (!en)
+ continue;
+ ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_id);
+ }
+ if (num != idx) {
+ rte_errno = EINVAL;
+ DRV_LOG(ERR, "Number of sample IDs are not as expected.");
+ return -rte_errno;
+ }
+ return ret;
+}
+
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_flex_parser(void *ctx,
+ struct mlx5_devx_graph_node_attr *data)
+{
+ uint32_t in[MLX5_ST_SZ_DW(create_flex_parser_in)] = {0};
+ uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+ void *hdr = MLX5_ADDR_OF(create_flex_parser_in, in, hdr);
+ void *flex = MLX5_ADDR_OF(create_flex_parser_in, in, flex);
+ void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+ void *in_arc = MLX5_ADDR_OF(parse_graph_flex, flex, input_arc);
+ void *out_arc = MLX5_ADDR_OF(parse_graph_flex, flex, output_arc);
+ struct mlx5_devx_obj *parse_flex_obj = NULL;
+ uint32_t i;
+
+ parse_flex_obj = rte_calloc(__func__, 1, sizeof(*parse_flex_obj), 0);
+ if (!parse_flex_obj) {
+ DRV_LOG(ERR, "Failed to allocate flex parser data");
+ rte_errno = ENOMEM;
+ return NULL;
+ }
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+ MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+ MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+ MLX5_SET(parse_graph_flex, flex, header_length_mode,
+ data->header_length_mode);
+ MLX5_SET(parse_graph_flex, flex, header_length_base_value,
+ data->header_length_base_value);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_offset,
+ data->header_length_field_offset);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_shift,
+ data->header_length_field_shift);
+ MLX5_SET(parse_graph_flex, flex, header_length_field_mask,
+ data->header_length_field_mask);
+ for (i = 0; i < MLX5_GRAPH_NODE_SAMPLE_NUM; i++) {
+ struct mlx5_devx_match_sample_attr *s = &data->sample[i];
+ void *s_off = (void *)((char *)sample + i *
+ MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+
+ if (!s->flow_match_sample_en)
+ continue;
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_en, !!s->flow_match_sample_en);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset,
+ s->flow_match_sample_field_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_offset_mode,
+ s->flow_match_sample_offset_mode);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_mask,
+ s->flow_match_sample_field_offset_mask);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_offset_shift,
+ s->flow_match_sample_field_offset_shift);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_field_base_offset,
+ s->flow_match_sample_field_base_offset);
+ MLX5_SET(parse_graph_flow_match_sample, s_off,
+ flow_match_sample_tunnel_mode,
+ s->flow_match_sample_tunnel_mode);
+ }
+ for (i = 0; i < MLX5_GRAPH_NODE_ARC_NUM; i++) {
+ struct mlx5_devx_graph_arc_attr *ia = &data->in[i];
+ struct mlx5_devx_graph_arc_attr *oa = &data->out[i];
+ void *in_off = (void *)((char *)in_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+ void *out_off = (void *)((char *)out_arc + i *
+ MLX5_ST_SZ_BYTES(parse_graph_arc));
+
+ if (ia->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, in_off,
+ compare_condition_value,
+ ia->compare_condition_value);
+ MLX5_SET(parse_graph_arc, in_off, start_inner_tunnel,
+ ia->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, in_off, arc_parse_graph_node,
+ ia->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, in_off,
+ parse_graph_node_handle,
+ ia->parse_graph_node_handle);
+ }
+ if (oa->arc_parse_graph_node != 0) {
+ MLX5_SET(parse_graph_arc, out_off,
+ compare_condition_value,
+ oa->compare_condition_value);
+ MLX5_SET(parse_graph_arc, out_off, start_inner_tunnel,
+ oa->start_inner_tunnel);
+ MLX5_SET(parse_graph_arc, out_off, arc_parse_graph_node,
+ oa->arc_parse_graph_node);
+ MLX5_SET(parse_graph_arc, out_off,
+ parse_graph_node_handle,
+ oa->parse_graph_node_handle);
+ }
+ }
+ parse_flex_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+ out, sizeof(out));
+ if (!parse_flex_obj->obj) {
+ rte_errno = errno;
+ DRV_LOG(ERR, "Failed to create FLEX PARSE GRAPH object "
+ "by using DevX.");
+ rte_free(parse_flex_obj);
+ return NULL;
+ }
+ parse_flex_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+ return parse_flex_obj;
+}
+
/**
* Query HCA attributes.
* Using those attributes we can check on run time if the device
@@ -527,6 +688,9 @@ struct mlx5_devx_obj *
attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
general_obj_types) &
MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
+ attr->parse_graph_flex_node = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+ general_obj_types) &
+ MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE);
attr->wqe_index_ignore = MLX5_GET(cmd_hca_cap, hcattr,
wqe_index_ignore_cap);
attr->cross_channel = MLX5_GET(cmd_hca_cap, hcattr, cd);
@@ -1098,7 +1262,7 @@ struct mlx5_devx_obj *
if (ret) {
DRV_LOG(ERR, "Failed to modify SQ using DevX");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1412,7 +1576,7 @@ struct mlx5_devx_obj *
if (ret) {
DRV_LOG(ERR, "Failed to modify VIRTQ using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
@@ -1615,7 +1779,7 @@ struct mlx5_devx_obj *
if (ret) {
DRV_LOG(ERR, "Failed to modify QP using DevX.");
rte_errno = errno;
- return -errno;
+ return -rte_errno;
}
return ret;
}
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index 4811a1a..2b39ad2 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -70,6 +70,7 @@ struct mlx5_hca_attr {
uint32_t eswitch_manager:1;
uint32_t flow_counters_dump:1;
uint32_t log_max_rqt_size:5;
+ uint32_t parse_graph_flex_node:1;
uint8_t flow_counter_bulk_alloc_bitmap;
uint32_t eth_net_offloads:1;
uint32_t eth_virt:1;
@@ -426,6 +427,13 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
__rte_internal
int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
struct mlx5_devx_rqt_attr *rqt_attr);
+__rte_internal
+int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+ uint32_t ids[], uint32_t num);
+
+__rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_flex_parser(void *ctx,
+ struct mlx5_devx_graph_node_attr *data);
__rte_internal
int mlx5_devx_cmd_register_read(void *ctx, uint16_t reg_id,
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 81ff3d0..ec3b600 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -994,10 +994,9 @@ enum {
MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1,
};
-enum {
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
- MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
-};
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q (1ULL << 0xd)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS (1ULL << 0x1c)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE (1ULL << 0x22)
enum {
MLX5_HCA_CAP_OPMOD_GET_MAX = 0,
@@ -2068,6 +2067,7 @@ struct mlx5_ifc_create_cq_in_bits {
enum {
MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
+ MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH = 0x0022,
};
struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2611,6 +2611,67 @@ struct mlx5_ifc_register_mtutc_bits {
#define MLX5_MTUTC_TIMESTAMP_MODE_INTERNAL_TIMER 0
#define MLX5_MTUTC_TIMESTAMP_MODE_REAL_TIME 1
+struct mlx5_ifc_parse_graph_arc_bits {
+ u8 start_inner_tunnel[0x1];
+ u8 reserved_at_1[0x7];
+ u8 arc_parse_graph_node[0x8];
+ u8 compare_condition_value[0x10];
+ u8 parse_graph_node_handle[0x20];
+ u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_parse_graph_flow_match_sample_bits {
+ u8 flow_match_sample_en[0x1];
+ u8 reserved_at_1[0x3];
+ u8 flow_match_sample_offset_mode[0x4];
+ u8 reserved_at_5[0x8];
+ u8 flow_match_sample_field_offset[0x10];
+ u8 reserved_at_32[0x4];
+ u8 flow_match_sample_field_offset_shift[0x4];
+ u8 flow_match_sample_field_base_offset[0x8];
+ u8 reserved_at_48[0xd];
+ u8 flow_match_sample_tunnel_mode[0x3];
+ u8 flow_match_sample_field_offset_mask[0x20];
+ u8 flow_match_sample_field_id[0x20];
+};
+
+struct mlx5_ifc_parse_graph_flex_bits {
+ u8 modify_field_select[0x40];
+ u8 reserved_at_64[0x20];
+ u8 header_length_base_value[0x10];
+ u8 reserved_at_112[0x4];
+ u8 header_length_field_shift[0x4];
+ u8 reserved_at_120[0x4];
+ u8 header_length_mode[0x4];
+ u8 header_length_field_offset[0x10];
+ u8 next_header_field_offset[0x10];
+ u8 reserved_at_160[0x1b];
+ u8 next_header_field_size[0x5];
+ u8 header_length_field_mask[0x20];
+ u8 reserved_at_224[0x20];
+ struct mlx5_ifc_parse_graph_flow_match_sample_bits sample_table[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits input_arc[0x8];
+ struct mlx5_ifc_parse_graph_arc_bits output_arc[0x8];
+};
+
+struct mlx5_ifc_create_flex_parser_in_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_create_flex_parser_out_bits {
+ struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+ struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_parse_graph_flex_out_bits {
+ u8 status[0x8];
+ u8 reserved_at_8[0x18];
+ u8 syndrome[0x20];
+ u8 reserved_at_40[0x40];
+ struct mlx5_ifc_parse_graph_flex_bits capability;
+};
+
/* CQE format mask. */
#define MLX5E_CQE_FORMAT_MASK 0xc
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index 68007ef..5aad219 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -11,6 +11,7 @@ INTERNAL {
mlx5_dev_to_pci_addr;
mlx5_devx_cmd_create_cq;
+ mlx5_devx_cmd_create_flex_parser;
mlx5_devx_cmd_create_qp;
mlx5_devx_cmd_create_rq;
mlx5_devx_cmd_create_rqt;
@@ -32,6 +33,7 @@ INTERNAL {
mlx5_devx_cmd_modify_virtq;
mlx5_devx_cmd_qp_query_tis_td;
mlx5_devx_cmd_query_hca_attr;
+ mlx5_devx_cmd_query_parse_samples;
mlx5_devx_cmd_query_virtio_q_counters;
mlx5_devx_cmd_query_virtq;
mlx5_devx_cmd_register_read;
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 5/7] net/mlx5: create and destroy eCPRI flex parser
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (3 preceding siblings ...)
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
` (2 subsequent siblings)
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
The eCPRI protocol has a unified format layout for its variants, over
the ETH layer (including .1Q) and the UDP layer.
The common header of the message has a fixed length of 4 bytes, and
the message payload layout differs based on the type field. Now only
types #0, #2 and #5 will be supported, and 2 bytes are needed.
When creating the flex parser, the header will be extended to 8
bytes and 2 DW samples are needed. The 1st DW starts from offset 0
and will be used for the type field of the common header. The 2nd
DW starts from offset 4 and will be used for the physical channel
ID, real-time control ID or measurement ID fields.
The parser will be created when a flow with an eCPRI item is observed
for the first time. After creation, it will remain in the system
and HW until the device is stopped. Right now, there is no need to
destroy the eCPRI flex parser after the last flow with an eCPRI item
is destroyed. This avoids alternating between creating and destroying
the eCPRI flex parser when a single eCPRI flow is repeatedly inserted
and removed.
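Purely as an illustration (the struct below is hypothetical, not from
the patch), the 8-byte window consumed by the parser and the two DW
samples described above can be pictured as:

#include <stdint.h>

/* Hypothetical view of the 8-byte window parsed for eCPRI over Ethernet. */
struct ecpri_parsed_window {
	uint32_t common_hdr;  /* DW sample #0, offset 0: revision/C/type/size. */
	uint32_t payload_dw;  /* DW sample #1, offset 4: PC/RTC/MSR ID fields. */
};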
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
drivers/net/mlx5/mlx5.c | 64 ++++++++++++++++++++++++++++++++++++++++-
drivers/net/mlx5/mlx5.h | 1 +
drivers/net/mlx5/mlx5_flow_dv.c | 3 +-
3 files changed, 66 insertions(+), 2 deletions(-)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 1ba5e0c..8fcb78a 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -643,11 +643,71 @@ struct mlx5_flow_id_pool *
struct mlx5_priv *priv = dev->data->dev_private;
struct mlx5_flex_parser_profiles *prf =
&priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+ struct mlx5_devx_graph_node_attr node = {
+ .modify_field_select = 0,
+ };
+ uint32_t ids[8];
+ int ret;
- (void)prf;
+ node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
+ /* 8 bytes now: 4B common header + 4B message body header. */
+ node.header_length_base_value = 0x8;
+ /* After MAC layer: Ether / VLAN. */
+ node.in[0].arc_parse_graph_node = MLX5_GRAPH_ARC_NODE_MAC;
+ /* Type of compared condition should be 0xAEFE in the L2 layer. */
+ node.in[0].compare_condition_value = RTE_ETHER_TYPE_ECPRI;
+ /* Sample #0: type in common header. */
+ node.sample[0].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[0].flow_match_sample_offset_mode = 0x0;
+ /* Only the 2nd byte will be used. */
+ node.sample[0].flow_match_sample_field_base_offset = 0x0;
+ /* Sample #1: message payload. */
+ node.sample[1].flow_match_sample_en = 1;
+ /* Fixed offset. */
+ node.sample[1].flow_match_sample_offset_mode = 0x0;
+ /*
+ * Only the first two bytes will be used right now, and its offset will
+ * start after the common header that with the length of a DW(u32).
+ */
+ node.sample[1].flow_match_sample_field_base_offset = sizeof(uint32_t);
+ prf->obj = mlx5_devx_cmd_create_flex_parser(priv->sh->ctx, &node);
+ if (!prf->obj) {
+ DRV_LOG(ERR, "Failed to create flex parser node object.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->num = 2;
+ ret = mlx5_devx_cmd_query_parse_samples(prf->obj, ids, prf->num);
+ if (ret) {
+ DRV_LOG(ERR, "Failed to query sample IDs.");
+ return (rte_errno == 0) ? -ENODEV : -rte_errno;
+ }
+ prf->offset[0] = 0x0;
+ prf->offset[1] = sizeof(uint32_t);
+ prf->ids[0] = ids[0];
+ prf->ids[1] = ids[1];
return 0;
}
+/*
+ * Destroy the flex parser node, including the parser itself, input / output
+ * arcs and DW samples. Resources could be reused then.
+ *
+ * @param dev
+ * Pointer to Ethernet device structure.
+ */
+static void
+mlx5_flex_parser_ecpri_release(struct rte_eth_dev *dev)
+{
+ struct mlx5_priv *priv = dev->data->dev_private;
+ struct mlx5_flex_parser_profiles *prf =
+ &priv->sh->fp[MLX5_FLEX_PARSER_ECPRI_0];
+
+ if (prf->obj)
+ mlx5_devx_cmd_destroy(prf->obj);
+ prf->obj = NULL;
+}
+
/**
* Allocate shared device context. If there is multiport device the
* master and representors will share this context, if there is single
@@ -1231,6 +1291,8 @@ struct mlx5_dev_ctx_shared *
rte_wmb();
/* Disable datapath on secondary process. */
mlx5_mp_req_stop_rxtx(dev);
+ /* Free the eCPRI flex parser resource. */
+ mlx5_flex_parser_ecpri_release(dev);
if (priv->rxqs != NULL) {
/* XXX race condition if mlx5_rx_burst() is still running. */
usleep(1000);
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index 61abe4a..2e61d0c 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -1025,6 +1025,7 @@ int mlx5_os_read_dev_stat(struct mlx5_priv *priv,
void mlx5_os_stats_init(struct rte_eth_dev *dev);
void mlx5_os_set_reg_mr_cb(mlx5_reg_mr_t *reg_mr_cb,
mlx5_dereg_mr_t *dereg_mr_cb);
+
/* mlx5_txpp.c */
int mlx5_txpp_start(struct rte_eth_dev *dev);
diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c
index cd2b0f0..ceb585d 100644
--- a/drivers/net/mlx5/mlx5_flow_dv.c
+++ b/drivers/net/mlx5/mlx5_flow_dv.c
@@ -8662,10 +8662,11 @@ struct field_modify_info modify_tcp[] = {
break;
case RTE_FLOW_ITEM_TYPE_ECPRI:
if (!mlx5_flex_parser_ecpri_exist(dev)) {
+ /* Create it only the first time to be used. */
ret = mlx5_flex_parser_ecpri_alloc(dev);
if (ret)
return rte_flow_error_set
- (error, ret,
+ (error, -ret,
RTE_FLOW_ERROR_TYPE_ITEM,
NULL,
"cannot create eCPRI parser");
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 6/7] net/mlx5: add eCPRI flex parser capacity check
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (4 preceding siblings ...)
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 7/7] doc: update release notes and guides for eCPRI Bing Zhao
2020-07-17 12:55 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Raslan Darawsheh
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
If the NIC or the FW does not support the dynamic flex parser,
an error will be returned when trying to create the parser for eCPRI,
and it is hard to know the detailed reason for the failure.
Before creating the parser node and using the parser, the capability
bit saved in the HCA_CAP could be used to confirm whether the dynamic
flex parser is supported.
If not, ENOTSUP is returned directly to prevent the following steps
from being executed.
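For reference, a minimal sketch (an assumption, not the patch code) of
how the capability bit added by this series could be read before
creating the parser; the include path and the "ctx" DevX context are
placeholders. The driver itself checks the already-cached
priv->config.hca_attr.parse_graph_flex_node, as the diff below shows.

#include <mlx5_devx_cmds.h>

static int
ecpri_flex_parser_supported(void *ctx)
{
	struct mlx5_hca_attr attr = { 0 };

	/* Query HCA capabilities through DevX. */
	if (mlx5_devx_cmd_query_hca_attr(ctx, &attr))
		return 0; /* Query failed: treat as unsupported. */
	/* Capability bit filled from general_obj_types in patch 4/7. */
	return attr.parse_graph_flex_node;
}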
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
v3: fix the wrong member name in the private structure.
---
drivers/net/mlx5/mlx5.c | 5 +++++
1 file changed, 5 insertions(+)
diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c
index 8fcb78a..723c1dd 100644
--- a/drivers/net/mlx5/mlx5.c
+++ b/drivers/net/mlx5/mlx5.c
@@ -649,6 +649,11 @@ struct mlx5_flow_id_pool *
uint32_t ids[8];
int ret;
+ if (!priv->config.hca_attr.parse_graph_flex_node) {
+ DRV_LOG(ERR, "Dynamic flex parser is not supported "
+ "for device %s.", priv->dev_data->name);
+ return -ENOTSUP;
+ }
node.header_length_mode = MLX5_GRAPH_NODE_LEN_FIXED;
/* 8 bytes now: 4B common header + 4B message body header. */
node.header_length_base_value = 0x8;
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* [dpdk-dev] [PATCH v4 7/7] doc: update release notes and guides for eCPRI
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (5 preceding siblings ...)
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
@ 2020-07-17 7:11 ` Bing Zhao
2020-07-17 12:55 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Raslan Darawsheh
7 siblings, 0 replies; 40+ messages in thread
From: Bing Zhao @ 2020-07-17 7:11 UTC (permalink / raw)
To: orika, viacheslavo; +Cc: rasland, matan, dev, netanelg
Update the mlx5 PMD part of the release notes to mention the
support of eCPRI.
Update the firmware configuration section in the mlx5 NIC guide with
the settings required for eCPRI.
Signed-off-by: Bing Zhao <bingz@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
---
doc/guides/nics/mlx5.rst | 5 +++++
doc/guides/rel_notes/release_20_08.rst | 1 +
2 files changed, 6 insertions(+)
diff --git a/doc/guides/nics/mlx5.rst b/doc/guides/nics/mlx5.rst
index 9a57768..c185129 100644
--- a/doc/guides/nics/mlx5.rst
+++ b/doc/guides/nics/mlx5.rst
@@ -993,6 +993,11 @@ Below are some firmware configurations listed.
FLEX_PARSER_PROFILE_ENABLE=3
+- enable eCPRI flow matching::
+
+ FLEX_PARSER_PROFILE_ENABLE=4
+ PROG_PARSE_GRAPH=1
+
Prerequisites
-------------
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index f19b748..6f44ffd 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -122,6 +122,7 @@ New Features
* Added new PMD devarg ``reclaim_mem_mode``.
* Added new devarg ``lacp_by_user``.
+ * Added support for eCPRI protocol offloading.
* **Added vDPA device APIs to query virtio queue statistics.**
--
1.8.3.1
^ permalink raw reply [flat|nested] 40+ messages in thread
* Re: [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
` (6 preceding siblings ...)
2020-07-17 7:11 ` [dpdk-dev] [PATCH v4 7/7] doc: update release notes and guides for eCPRI Bing Zhao
@ 2020-07-17 12:55 ` Raslan Darawsheh
7 siblings, 0 replies; 40+ messages in thread
From: Raslan Darawsheh @ 2020-07-17 12:55 UTC (permalink / raw)
To: Bing Zhao, Ori Kam, Slava Ovsiienko; +Cc: Matan Azrad, dev, Netanel Gonen
Hi,
> -----Original Message-----
> From: Bing Zhao <bingz@mellanox.com>
> Sent: Friday, July 17, 2020 10:12 AM
> To: Ori Kam <orika@mellanox.com>; Slava Ovsiienko
> <viacheslavo@mellanox.com>
> Cc: Raslan Darawsheh <rasland@mellanox.com>; Matan Azrad
> <matan@mellanox.com>; dev@dpdk.org; Netanel Gonen
> <netanelg@mellanox.com>
> Subject: [PATCH v4 0/7] add eCPRI support in mlx5 driver
>
> This patch set is to add the eCPRI support of flow rules in mlx5 PMD
> driver. Right now, only eCPRI over Ethernet layer (including VLAN)
> is supported. eCPRI over UDP will be supported in the future. If the
> flow rule to be inserted is not supported, PMD driver will return
> error to indicate the reason of the failure.
>
> v2: listed as below
> 1. added document updates
> 2. add NIC / FW capacity check
> 3. fix mask of type in common header check and code cleanup
> v3: fix the wrong member name in the private structure
> v4: rebased and resolve conflict
>
> Bing Zhao (7):
> net/mlx5: add flow validation of eCPRI header
> net/mlx5: add flow translation of eCPRI header
> common/mlx5: add flex parser DevX structures
> common/mlx5: adding DevX command for flex parsers
> net/mlx5: create and destroy eCPRI flex parser
> net/mlx5: add eCPRI flex parser capacity check
> doc: update release notes and guides for eCPRI
>
> doc/guides/nics/mlx5.rst | 5 +
> doc/guides/rel_notes/release_20_08.rst | 1 +
> drivers/common/mlx5/mlx5_devx_cmds.c | 170
> +++++++++++++++++++++++-
> drivers/common/mlx5/mlx5_devx_cmds.h | 52 ++++++++
> drivers/common/mlx5/mlx5_prm.h | 115 +++++++++++++++-
> drivers/common/mlx5/rte_common_mlx5_version.map | 2 +
> drivers/net/mlx5/mlx5.c | 107 +++++++++++++++
> drivers/net/mlx5/mlx5.h | 19 +++
> drivers/net/mlx5/mlx5_flow.c | 107 ++++++++++++++-
> drivers/net/mlx5/mlx5_flow.h | 9 ++
> drivers/net/mlx5/mlx5_flow_dv.c | 126 ++++++++++++++++++
> 11 files changed, 704 insertions(+), 9 deletions(-)
>
> --
> 1.8.3.1
Series applied to next-net-mlx,
Kindest regards,
Raslan Darawsheh
^ permalink raw reply [flat|nested] 40+ messages in thread