DPDK patches and discussions
From: Bing Zhao <bingz@mellanox.com>
To: orika@mellanox.com, viacheslavo@mellanox.com
Cc: rasland@mellanox.com, matan@mellanox.com, dev@dpdk.org,
	Netanel Gonen <netanelg@r-arch-host16.mtr.labs.mlnx>,
	Netanel Gonen <netanelg@mellanox.com>
Subject: [dpdk-dev] [PATCH 4/5] net/mlx5: adding Devx command for flex parsers
Date: Wed,  8 Jul 2020 22:43:06 +0800
Message-ID: <1594219387-240274-5-git-send-email-bingz@mellanox.com> (raw)
In-Reply-To: <1594219387-240274-1-git-send-email-bingz@mellanox.com>

From: Netanel Gonen <netanelg@r-arch-host16.mtr.labs.mlnx>

In order to use the dynamic flex parser to parse protocols that are
not supported natively, two steps are needed.

The first step is to create the parse graph node. A flex parser has
three parts: node, arc and sample. The node is the main structure of
a flex parser; when creating it, the length of the protocol header
must be specified. The input arc(s) are mandatory: they tell the HW
when to use this parser to parse the packet. Up to 8 input arcs are
supported for a single parser node, which gives SW the ability to
support this protocol over multiple layers. The output arc is
optional, and up to 8 output arcs are supported as well. If the
protocol is the last header of the stack, the output arc should be
NULL; otherwise it must be specified. The protocol type in the arc
indicates which parser this flex parser node points to or is pointed
to by. For an output arc, the offset and size of the next header
type field must be set in the node structure, so that the HW can
read the proper type of the next header and decide which parser to
point to.
Note: there are two types of parsers now, native parsers and flex
parsers. An arc between two flex parsers is not supported at this
stage.

The second step is to query the sample IDs. If a protocol header
parsed by the flex parser needs to be used in flow rule offloading,
the DW samples must be configured when creating the parse graph
node, by setting the byte offsets from the start of the header.
After the node is created successfully, a general object handle is
returned. This object can be queried with a DevX command to get the
sample IDs.
When creating a flow, a sample ID can be used to sample a DW from
the parsed header: the 4 consecutive bytes starting from the offset.
The flow entry can specify a mask to match on only part of this DW.
Up to 8 samples are supported for a single parse graph node, and a
sample offset must not exceed the header length.

The HW resources are limited, so the low-level driver error must be
checked whenever the creation of a parse graph node fails.

Signed-off-by: Netanel Gonen <netanelg@mellanox.com>
Signed-off-by: Bing Zhao <bingz@mellanox.com>
---
 drivers/common/mlx5/mlx5_devx_cmds.c            | 168 +++++++++++++++++++++++-
 drivers/common/mlx5/mlx5_devx_cmds.h            |   8 ++
 drivers/common/mlx5/mlx5_prm.h                  |  69 +++++++++-
 drivers/common/mlx5/rte_common_mlx5_version.map |   2 +
 4 files changed, 240 insertions(+), 7 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_devx_cmds.c b/drivers/common/mlx5/mlx5_devx_cmds.c
index ec92eb6..4bad466 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.c
+++ b/drivers/common/mlx5/mlx5_devx_cmds.c
@@ -396,6 +396,165 @@ struct mlx5_devx_obj *
 	}
 }
 
+int
+mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+				  uint32_t ids[], uint32_t num)
+{
+	uint32_t in[MLX5_ST_SZ_DW(general_obj_in_cmd_hdr)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(create_flex_parser_out)] = {0};
+	void *hdr = MLX5_ADDR_OF(create_flex_parser_out, in, hdr);
+	void *flex = MLX5_ADDR_OF(create_flex_parser_out, out, flex);
+	void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+	int ret;
+	uint32_t idx = 0;
+	uint32_t i;
+
+	if (num > 8) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Too many sample IDs to be fetched.");
+		return -rte_errno;
+	}
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_QUERY_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_id, flex_obj->id);
+	ret = mlx5_glue->devx_obj_query(flex_obj->obj, in, sizeof(in),
+					out, sizeof(out));
+	if (ret) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to query sample IDs with object %p.",
+			flex_obj);
+		return -rte_errno;
+	}
+	for (i = 0; i < 8; i++) {
+		void *s_off = (void *)((char *)sample + i *
+			      MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+		uint32_t en;
+
+		en = MLX5_GET(parse_graph_flow_match_sample, s_off,
+			      flow_match_sample_en);
+		if (!en)
+			continue;
+		ids[idx++] = MLX5_GET(parse_graph_flow_match_sample, s_off,
+				  flow_match_sample_field_id);
+	}
+	if (num != idx) {
+		rte_errno = EINVAL;
+		DRV_LOG(ERR, "Number of sample IDs is not as expected.");
+		return -rte_errno;
+	}
+	return ret;
+}
+
+struct mlx5_devx_obj *
+mlx5_devx_cmd_create_flex_parser(void *ctx,
+			      struct mlx5_devx_graph_node_attr *data)
+{
+	uint32_t in[MLX5_ST_SZ_DW(create_flex_parser_in)] = {0};
+	uint32_t out[MLX5_ST_SZ_DW(general_obj_out_cmd_hdr)] = {0};
+	void *hdr = MLX5_ADDR_OF(create_flex_parser_in, in, hdr);
+	void *flex = MLX5_ADDR_OF(create_flex_parser_in, in, flex);
+	void *sample = MLX5_ADDR_OF(parse_graph_flex, flex, sample_table);
+	void *in_arc = MLX5_ADDR_OF(parse_graph_flex, flex, input_arc);
+	void *out_arc = MLX5_ADDR_OF(parse_graph_flex, flex, output_arc);
+	struct mlx5_devx_obj *parse_flex_obj = NULL;
+	uint32_t i;
+
+	parse_flex_obj = rte_calloc(__func__, 1, sizeof(*parse_flex_obj), 0);
+	if (!parse_flex_obj) {
+		DRV_LOG(ERR, "Failed to allocate flex parser data");
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, opcode,
+		 MLX5_CMD_OP_CREATE_GENERAL_OBJECT);
+	MLX5_SET(general_obj_in_cmd_hdr, hdr, obj_type,
+		 MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH);
+	MLX5_SET(parse_graph_flex, flex, header_length_mode,
+		 data->header_length_mode);
+	MLX5_SET(parse_graph_flex, flex, header_length_base_value,
+		 data->header_length_base_value);
+	MLX5_SET(parse_graph_flex, flex, header_length_field_offset,
+		 data->header_length_field_offset);
+	MLX5_SET(parse_graph_flex, flex, header_length_field_shift,
+		 data->header_length_field_shift);
+	MLX5_SET(parse_graph_flex, flex, header_length_field_mask,
+		 data->header_length_field_mask);
+	for (i = 0; i < 8; i++) {
+		struct mlx5_devx_match_sample_attr *s = &data->sample[i];
+		void *s_off = (void *)((char *)sample + i *
+			      MLX5_ST_SZ_BYTES(parse_graph_flow_match_sample));
+
+		if (!s->flow_match_sample_en)
+			continue;
+		MLX5_SET(parse_graph_flow_match_sample, s_off,
+			 flow_match_sample_en, !!s->flow_match_sample_en);
+		MLX5_SET(parse_graph_flow_match_sample, s_off,
+			 flow_match_sample_field_offset,
+			 s->flow_match_sample_field_offset);
+		MLX5_SET(parse_graph_flow_match_sample, s_off,
+			 flow_match_sample_offset_mode,
+			 s->flow_match_sample_offset_mode);
+		MLX5_SET(parse_graph_flow_match_sample, s_off,
+			 flow_match_sample_field_offset_mask,
+			 s->flow_match_sample_field_offset_mask);
+		MLX5_SET(parse_graph_flow_match_sample, s_off,
+			 flow_match_sample_field_offset_shift,
+			 s->flow_match_sample_field_offset_shift);
+		MLX5_SET(parse_graph_flow_match_sample, s_off,
+			 flow_match_sample_field_base_offset,
+			 s->flow_match_sample_field_base_offset);
+		MLX5_SET(parse_graph_flow_match_sample, s_off,
+			 flow_match_sample_tunnel_mode,
+			 s->flow_match_sample_tunnel_mode);
+	}
+	for (i = 0; i < 8; i++) {
+		struct mlx5_devx_graph_arc_attr *ia = &data->in[i];
+		struct mlx5_devx_graph_arc_attr *oa = &data->out[i];
+		void *in_off = (void *)((char *)in_arc + i *
+			      MLX5_ST_SZ_BYTES(parse_graph_arc));
+		void *out_off = (void *)((char *)out_arc + i *
+			      MLX5_ST_SZ_BYTES(parse_graph_arc));
+
+		if (ia->arc_parse_graph_node != 0) {
+			MLX5_SET(parse_graph_arc, in_off,
+				 compare_condition_value,
+				 ia->compare_condition_value);
+			MLX5_SET(parse_graph_arc, in_off, start_inner_tunnel,
+				 ia->start_inner_tunnel);
+			MLX5_SET(parse_graph_arc, in_off, arc_parse_graph_node,
+				 ia->arc_parse_graph_node);
+			MLX5_SET(parse_graph_arc, in_off, parse_graph_node_handle,
+				 ia->parse_graph_node_handle);
+		}
+		if (oa->arc_parse_graph_node != 0) {
+			MLX5_SET(parse_graph_arc, out_off,
+				 compare_condition_value,
+				 oa->compare_condition_value);
+			MLX5_SET(parse_graph_arc, out_off, start_inner_tunnel,
+				 oa->start_inner_tunnel);
+			MLX5_SET(parse_graph_arc, out_off, arc_parse_graph_node,
+				 oa->arc_parse_graph_node);
+			MLX5_SET(parse_graph_arc, out_off, parse_graph_node_handle,
+				 oa->parse_graph_node_handle);
+		}
+	}
+	parse_flex_obj->obj = mlx5_glue->devx_obj_create(ctx, in, sizeof(in),
+							 out, sizeof(out));
+	if (!parse_flex_obj->obj) {
+		rte_errno = errno;
+		DRV_LOG(ERR, "Failed to create FLEX PARSE GRAPH object "
+			"by using DevX.");
+		rte_free(parse_flex_obj);
+		return NULL;
+	}
+	parse_flex_obj->id = MLX5_GET(general_obj_out_cmd_hdr, out, obj_id);
+	return parse_flex_obj;
+}
+
 /**
  * Query HCA attributes.
  * Using those attributes we can check on run time if the device
@@ -467,6 +626,9 @@ struct mlx5_devx_obj *
 	attr->vdpa.queue_counters_valid = !!(MLX5_GET64(cmd_hca_cap, hcattr,
 							general_obj_types) &
 				  MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS);
+	attr->parse_graph_flex_node = !!(MLX5_GET64(cmd_hca_cap, hcattr,
+					 general_obj_types) &
+			      MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE);
 	if (attr->qos.sup) {
 		MLX5_SET(query_hca_cap_in, in, op_mod,
 			 MLX5_GET_HCA_CAP_OP_MOD_QOS_CAP |
@@ -1024,7 +1186,7 @@ struct mlx5_devx_obj *
 	if (ret) {
 		DRV_LOG(ERR, "Failed to modify SQ using DevX");
 		rte_errno = errno;
-		return -errno;
+		return -rte_errno;
 	}
 	return ret;
 }
@@ -1337,7 +1499,7 @@ struct mlx5_devx_obj *
 	if (ret) {
 		DRV_LOG(ERR, "Failed to modify VIRTQ using DevX.");
 		rte_errno = errno;
-		return -errno;
+		return -rte_errno;
 	}
 	return ret;
 }
@@ -1540,7 +1702,7 @@ struct mlx5_devx_obj *
 	if (ret) {
 		DRV_LOG(ERR, "Failed to modify QP using DevX.");
 		rte_errno = errno;
-		return -errno;
+		return -rte_errno;
 	}
 	return ret;
 }
diff --git a/drivers/common/mlx5/mlx5_devx_cmds.h b/drivers/common/mlx5/mlx5_devx_cmds.h
index faabfb1..9a91649 100644
--- a/drivers/common/mlx5/mlx5_devx_cmds.h
+++ b/drivers/common/mlx5/mlx5_devx_cmds.h
@@ -68,6 +68,7 @@ struct mlx5_hca_attr {
 	uint32_t eswitch_manager:1;
 	uint32_t flow_counters_dump:1;
 	uint32_t log_max_rqt_size:5;
+	uint32_t parse_graph_flex_node:1;
 	uint8_t flow_counter_bulk_alloc_bitmap;
 	uint32_t eth_net_offloads:1;
 	uint32_t eth_virt:1;
@@ -416,6 +417,13 @@ int mlx5_devx_cmd_modify_qp_state(struct mlx5_devx_obj *qp,
 __rte_internal
 int mlx5_devx_cmd_modify_rqt(struct mlx5_devx_obj *rqt,
 			     struct mlx5_devx_rqt_attr *rqt_attr);
+__rte_internal
+int mlx5_devx_cmd_query_parse_samples(struct mlx5_devx_obj *flex_obj,
+				      uint32_t ids[], uint32_t num);
+
+__rte_internal
+struct mlx5_devx_obj *mlx5_devx_cmd_create_flex_parser(void *ctx,
+					struct mlx5_devx_graph_node_attr *data);
 
 /**
  * Create virtio queue counters object DevX API.
diff --git a/drivers/common/mlx5/mlx5_prm.h b/drivers/common/mlx5/mlx5_prm.h
index 9fed365..2b63d5a 100644
--- a/drivers/common/mlx5/mlx5_prm.h
+++ b/drivers/common/mlx5/mlx5_prm.h
@@ -961,10 +961,9 @@ enum {
 	MLX5_GET_HCA_CAP_OP_MOD_VDPA_EMULATION = 0x13 << 1,
 };
 
-enum {
-	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q = (1ULL << 0xd),
-	MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS = (1ULL << 0x1c),
-};
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTQ_NET_Q			(1ULL << 0xd)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_VIRTIO_Q_COUNTERS		(1ULL << 0x1c)
+#define MLX5_GENERAL_OBJ_TYPES_CAP_PARSE_GRAPH_FLEX_NODE	(1ULL << 0x22)
 
 enum {
 	MLX5_HCA_CAP_OPMOD_GET_MAX   = 0,
@@ -2022,6 +2021,7 @@ struct mlx5_ifc_create_cq_in_bits {
 enum {
 	MLX5_GENERAL_OBJ_TYPE_VIRTQ = 0x000d,
 	MLX5_GENERAL_OBJ_TYPE_VIRTIO_Q_COUNTERS = 0x001c,
+	MLX5_GENERAL_OBJ_TYPE_FLEX_PARSE_GRAPH = 0x0022,
 };
 
 struct mlx5_ifc_general_obj_in_cmd_hdr_bits {
@@ -2500,6 +2500,67 @@ struct mlx5_ifc_query_qp_in_bits {
 	u8 reserved_at_60[0x20];
 };
 
+struct mlx5_ifc_parse_graph_arc_bits {
+	u8 start_inner_tunnel[0x1];
+	u8 reserved_at_1[0x7];
+	u8 arc_parse_graph_node[0x8];
+	u8 compare_condition_value[0x10];
+	u8 parse_graph_node_handle[0x20];
+	u8 reserved_at_40[0x40];
+};
+
+struct mlx5_ifc_parse_graph_flow_match_sample_bits {
+	u8 flow_match_sample_en[0x1];
+	u8 reserved_at_1[0x3];
+	u8 flow_match_sample_offset_mode[0x4];
+	u8 reserved_at_5[0x8];
+	u8 flow_match_sample_field_offset[0x10];
+	u8 reserved_at_32[0x4];
+	u8 flow_match_sample_field_offset_shift[0x4];
+	u8 flow_match_sample_field_base_offset[0x8];
+	u8 reserved_at_48[0xd];
+	u8 flow_match_sample_tunnel_mode[0x3];
+	u8 flow_match_sample_field_offset_mask[0x20];
+	u8 flow_match_sample_field_id[0x20];
+};
+
+struct mlx5_ifc_parse_graph_flex_bits {
+	u8 modify_field_select[0x40];
+	u8 reserved_at_64[0x20];
+	u8 header_length_base_value[0x10];
+	u8 reserved_at_112[0x4];
+	u8 header_length_field_shift[0x4];
+	u8 reserved_at_120[0x4];
+	u8 header_length_mode[0x4];
+	u8 header_length_field_offset[0x10];
+	u8 next_header_field_offset[0x10];
+	u8 reserved_at_160[0x1b];
+	u8 next_header_field_size[0x5];
+	u8 header_length_field_mask[0x20];
+	u8 reserved_at_224[0x20];
+	struct mlx5_ifc_parse_graph_flow_match_sample_bits sample_table[0x8];
+	struct mlx5_ifc_parse_graph_arc_bits input_arc[0x8];
+	struct mlx5_ifc_parse_graph_arc_bits output_arc[0x8];
+};
+
+struct mlx5_ifc_create_flex_parser_in_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_create_flex_parser_out_bits {
+	struct mlx5_ifc_general_obj_in_cmd_hdr_bits hdr;
+	struct mlx5_ifc_parse_graph_flex_bits flex;
+};
+
+struct mlx5_ifc_parse_graph_flex_out_bits {
+	u8 status[0x8];
+	u8 reserved_at_8[0x18];
+	u8 syndrome[0x20];
+	u8 reserved_at_40[0x40];
+	struct mlx5_ifc_parse_graph_flex_bits capability;
+};
+
 /* CQE format mask. */
 #define MLX5E_CQE_FORMAT_MASK 0xc
 
diff --git a/drivers/common/mlx5/rte_common_mlx5_version.map b/drivers/common/mlx5/rte_common_mlx5_version.map
index ae57ebd..c86497f 100644
--- a/drivers/common/mlx5/rte_common_mlx5_version.map
+++ b/drivers/common/mlx5/rte_common_mlx5_version.map
@@ -11,6 +11,7 @@ INTERNAL {
 	mlx5_dev_to_pci_addr;
 
 	mlx5_devx_cmd_create_cq;
+	mlx5_devx_cmd_create_flex_parser;
 	mlx5_devx_cmd_create_qp;
 	mlx5_devx_cmd_create_rq;
 	mlx5_devx_cmd_create_rqt;
@@ -32,6 +33,7 @@ INTERNAL {
 	mlx5_devx_cmd_modify_virtq;
 	mlx5_devx_cmd_qp_query_tis_td;
 	mlx5_devx_cmd_query_hca_attr;
+	mlx5_devx_cmd_query_parse_samples;
 	mlx5_devx_cmd_query_virtio_q_counters;
 	mlx5_devx_cmd_query_virtq;
 	mlx5_devx_get_out_command_status;
-- 
1.8.3.1


Thread overview: 40+ messages
2020-07-08 14:43 [dpdk-dev] [PATCH 0/5] add eCPRI support in mlx5 driver Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 1/5] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 2/5] net/mlx5: add flow translation " Bing Zhao
2020-07-09 12:22   ` Thomas Monjalon
2020-07-09 14:47     ` Bing Zhao
2020-07-08 14:43 ` [dpdk-dev] [PATCH 3/5] net/mlx5: add flex parser devx structures Bing Zhao
2020-07-08 14:43 ` Bing Zhao [this message]
2020-07-08 14:43 ` [dpdk-dev] [PATCH 5/5] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
2020-07-16 13:49 ` [dpdk-dev] [PATCH v2 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-16 13:49   ` [dpdk-dev] [PATCH v2 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-16 13:49   ` [dpdk-dev] [PATCH v2 2/7] net/mlx5: add flow translation " Bing Zhao
2020-07-16 13:49   ` [dpdk-dev] [PATCH v2 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
2020-07-16 13:49   ` [dpdk-dev] [PATCH v2 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
2020-07-16 13:49   ` [dpdk-dev] [PATCH v2 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
2020-07-16 13:49   ` [dpdk-dev] [PATCH v2 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
2020-07-16 13:49   ` [dpdk-dev] [PATCH v2 7/7] doc: update release notes and guides for eCPRI Bing Zhao
2020-07-16 14:23   ` [dpdk-dev] [PATCH v3 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-16 14:23     ` [dpdk-dev] [PATCH v3 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-16 15:04       ` Slava Ovsiienko
2020-07-16 14:23     ` [dpdk-dev] [PATCH v3 2/7] net/mlx5: add flow translation " Bing Zhao
2020-07-16 15:04       ` Slava Ovsiienko
2020-07-16 14:23     ` [dpdk-dev] [PATCH v3 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
2020-07-16 15:04       ` Slava Ovsiienko
2020-07-16 14:23     ` [dpdk-dev] [PATCH v3 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
2020-07-16 15:05       ` Slava Ovsiienko
2020-07-16 14:23     ` [dpdk-dev] [PATCH v3 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
2020-07-16 15:05       ` Slava Ovsiienko
2020-07-16 14:23     ` [dpdk-dev] [PATCH v3 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
2020-07-16 15:05       ` Slava Ovsiienko
2020-07-16 14:23     ` [dpdk-dev] [PATCH v3 7/7] doc: update release notes and guides for eCPRI Bing Zhao
2020-07-16 15:05       ` Slava Ovsiienko
2020-07-17  7:11     ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Bing Zhao
2020-07-17  7:11       ` [dpdk-dev] [PATCH v4 1/7] net/mlx5: add flow validation of eCPRI header Bing Zhao
2020-07-17  7:11       ` [dpdk-dev] [PATCH v4 2/7] net/mlx5: add flow translation " Bing Zhao
2020-07-17  7:11       ` [dpdk-dev] [PATCH v4 3/7] common/mlx5: add flex parser DevX structures Bing Zhao
2020-07-17  7:11       ` [dpdk-dev] [PATCH v4 4/7] common/mlx5: adding DevX command for flex parsers Bing Zhao
2020-07-17  7:11       ` [dpdk-dev] [PATCH v4 5/7] net/mlx5: create and destroy eCPRI flex parser Bing Zhao
2020-07-17  7:11       ` [dpdk-dev] [PATCH v4 6/7] net/mlx5: add eCPRI flex parser capacity check Bing Zhao
2020-07-17  7:11       ` [dpdk-dev] [PATCH v4 7/7] doc: update release notes and guides for eCPRI Bing Zhao
2020-07-17 12:55       ` [dpdk-dev] [PATCH v4 0/7] add eCPRI support in mlx5 driver Raslan Darawsheh
