DPDK patches and discussions
From: Hemant Agrawal <hemant.agrawal@nxp.com>
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Jun Yang <jun.yang@nxp.com>
Subject: [dpdk-dev] [PATCH v2 15/29] net/dpaa2: support dynamic flow control
Date: Tue,  7 Jul 2020 14:52:30 +0530	[thread overview]
Message-ID: <20200707092244.12791-16-hemant.agrawal@nxp.com> (raw)
In-Reply-To: <20200707092244.12791-1-hemant.agrawal@nxp.com>

From: Jun Yang <jun.yang@nxp.com>

Use a dynamically composed flow key instead of a statically defined layout.

The actual key/mask size depends on the protocols and/or fields
of the patterns specified.
In addition, the key and mask start from the beginning of the IOVA buffer.
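
The scheme can be sketched with a small self-contained model (hypothetical
names, not the actual driver structures): each parsed pattern appends its
spec and mask at the current end of the key/mask buffers and grows the
running key size, so the rule always starts at offset 0 of the IOVA area
with no fixed-offset gaps.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Minimal sketch of dynamic key composition (hypothetical names, not the
 * driver's structures). Each matched pattern appends its fields at the
 * current end of the key/mask buffers, so the rule starts at offset 0 of
 * the IOVA area and key_size grows only by what the patterns actually use. */
struct flow_rule {
	uint8_t key[256];
	uint8_t mask[256];
	size_t key_size; /* running offset == size consumed so far */
};

static void rule_append(struct flow_rule *r, const void *spec,
			const void *mask, size_t len)
{
	memcpy(r->key + r->key_size, spec, len);
	memcpy(r->mask + r->key_size, mask, len);
	r->key_size += len;
}
```

With this scheme an ETH + IPv4 + UDP rule consumes only the bytes those
headers contribute, instead of the fixed worst-case offsets the removed
DPAA2_CLS_RULE_OFFSET_* macros assumed.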

Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 doc/guides/nics/features/dpaa2.ini     |   1 +
 doc/guides/rel_notes/release_20_08.rst |   1 +
 drivers/net/dpaa2/dpaa2_flow.c         | 146 ++++++-------------------
 3 files changed, 36 insertions(+), 112 deletions(-)

diff --git a/doc/guides/nics/features/dpaa2.ini b/doc/guides/nics/features/dpaa2.ini
index c2214fbd5..3685e2e02 100644
--- a/doc/guides/nics/features/dpaa2.ini
+++ b/doc/guides/nics/features/dpaa2.ini
@@ -16,6 +16,7 @@ Unicast MAC filter   = Y
 RSS hash             = Y
 VLAN filter          = Y
 Flow control         = Y
+Flow API             = Y
 VLAN offload         = Y
 L3 checksum offload  = Y
 L4 checksum offload  = Y
diff --git a/doc/guides/rel_notes/release_20_08.rst b/doc/guides/rel_notes/release_20_08.rst
index e5bc5cfd8..97267f7b7 100644
--- a/doc/guides/rel_notes/release_20_08.rst
+++ b/doc/guides/rel_notes/release_20_08.rst
@@ -131,6 +131,7 @@ New Features
   Updated the NXP dpaa2 ethdev  with new features and improvements, including:
 
   * Added support to use datapath APIs from non-EAL pthread
+  * Added support for dynamic flow management
 
 Removed Items
 -------------
diff --git a/drivers/net/dpaa2/dpaa2_flow.c b/drivers/net/dpaa2/dpaa2_flow.c
index 8aa65db30..05d115c78 100644
--- a/drivers/net/dpaa2/dpaa2_flow.c
+++ b/drivers/net/dpaa2/dpaa2_flow.c
@@ -33,29 +33,6 @@ struct rte_flow {
 	uint16_t flow_id;
 };
 
-/* Layout for rule compositions for supported patterns */
-/* TODO: Current design only supports Ethernet + IPv4 based classification. */
-/* So corresponding offset macros are valid only. Rest are placeholder for */
-/* now. Once support for other netwrok headers will be added then */
-/* corresponding macros will be updated with correct values*/
-#define DPAA2_CLS_RULE_OFFSET_ETH	0	/*Start of buffer*/
-#define DPAA2_CLS_RULE_OFFSET_VLAN	14	/* DPAA2_CLS_RULE_OFFSET_ETH */
-						/*	+ Sizeof Eth fields  */
-#define DPAA2_CLS_RULE_OFFSET_IPV4	14	/* DPAA2_CLS_RULE_OFFSET_VLAN */
-						/*	+ Sizeof VLAN fields */
-#define DPAA2_CLS_RULE_OFFSET_IPV6	25	/* DPAA2_CLS_RULE_OFFSET_IPV4 */
-						/*	+ Sizeof IPV4 fields */
-#define DPAA2_CLS_RULE_OFFSET_ICMP	58	/* DPAA2_CLS_RULE_OFFSET_IPV6 */
-						/*	+ Sizeof IPV6 fields */
-#define DPAA2_CLS_RULE_OFFSET_UDP	60	/* DPAA2_CLS_RULE_OFFSET_ICMP */
-						/*	+ Sizeof ICMP fields */
-#define DPAA2_CLS_RULE_OFFSET_TCP	64	/* DPAA2_CLS_RULE_OFFSET_UDP  */
-						/*	+ Sizeof UDP fields  */
-#define DPAA2_CLS_RULE_OFFSET_SCTP	68	/* DPAA2_CLS_RULE_OFFSET_TCP  */
-						/*	+ Sizeof TCP fields  */
-#define DPAA2_CLS_RULE_OFFSET_GRE	72	/* DPAA2_CLS_RULE_OFFSET_SCTP */
-						/*	+ Sizeof SCTP fields */
-
 static const
 enum rte_flow_item_type dpaa2_supported_pattern_type[] = {
 	RTE_FLOW_ITEM_TYPE_END,
@@ -212,7 +189,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 			(pattern->mask ? pattern->mask : default_mask);
 
 	/* Key rule */
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ETH;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)(spec->src.addr_bytes),
 						sizeof(struct rte_ether_addr));
 	key_iova += sizeof(struct rte_ether_addr);
@@ -223,7 +200,7 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 						sizeof(rte_be16_t));
 
 	/* Key mask */
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ETH;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)(mask->src.addr_bytes),
 						sizeof(struct rte_ether_addr));
 	mask_iova += sizeof(struct rte_ether_addr);
@@ -233,9 +210,9 @@ dpaa2_configure_flow_eth(struct rte_flow *flow,
 	memcpy((void *)mask_iova, (const void *)(&mask->type),
 						sizeof(rte_be16_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ETH +
-				((2  * sizeof(struct rte_ether_addr)) +
-				sizeof(rte_be16_t)));
+	flow->key_size += ((2  * sizeof(struct rte_ether_addr)) +
+					sizeof(rte_be16_t));
+
 	return device_configured;
 }
 
@@ -335,15 +312,15 @@ dpaa2_configure_flow_vlan(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_vlan *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_VLAN;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)(&spec->tci),
 							sizeof(rte_be16_t));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_VLAN;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)(&mask->tci),
 							sizeof(rte_be16_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_VLAN + sizeof(rte_be16_t));
+	flow->key_size += sizeof(rte_be16_t);
 	return device_configured;
 }
 
@@ -474,7 +451,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_ipv4 *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)&spec->hdr.src_addr,
 							sizeof(uint32_t));
 	key_iova += sizeof(uint32_t);
@@ -484,7 +461,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
 	memcpy((void *)key_iova, (const void *)&spec->hdr.next_proto_id,
 							sizeof(uint8_t));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV4;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)&mask->hdr.src_addr,
 							sizeof(uint32_t));
 	mask_iova += sizeof(uint32_t);
@@ -494,9 +471,7 @@ dpaa2_configure_flow_ipv4(struct rte_flow *flow,
 	memcpy((void *)mask_iova, (const void *)&mask->hdr.next_proto_id,
 							sizeof(uint8_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV4 +
-				(2 * sizeof(uint32_t)) + sizeof(uint8_t));
-
+	flow->key_size += (2 * sizeof(uint32_t)) + sizeof(uint8_t);
 	return device_configured;
 }
 
@@ -613,23 +588,22 @@ dpaa2_configure_flow_ipv6(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_ipv6 *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV6;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)(spec->hdr.src_addr),
 						sizeof(spec->hdr.src_addr));
 	key_iova += sizeof(spec->hdr.src_addr);
 	memcpy((void *)key_iova, (const void *)(spec->hdr.dst_addr),
 						sizeof(spec->hdr.dst_addr));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_IPV6;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)(mask->hdr.src_addr),
 						sizeof(mask->hdr.src_addr));
 	mask_iova += sizeof(mask->hdr.src_addr);
 	memcpy((void *)mask_iova, (const void *)(mask->hdr.dst_addr),
 						sizeof(mask->hdr.dst_addr));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_IPV6 +
-					sizeof(spec->hdr.src_addr) +
-					sizeof(mask->hdr.dst_addr));
+	flow->key_size += sizeof(spec->hdr.src_addr) +
+					sizeof(mask->hdr.dst_addr);
 	return device_configured;
 }
 
@@ -746,22 +720,21 @@ dpaa2_configure_flow_icmp(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_icmp *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_ICMP;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_type,
 							sizeof(uint8_t));
 	key_iova += sizeof(uint8_t);
 	memcpy((void *)key_iova, (const void *)&spec->hdr.icmp_code,
 							sizeof(uint8_t));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_ICMP;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_type,
 							sizeof(uint8_t));
 	key_iova += sizeof(uint8_t);
 	memcpy((void *)mask_iova, (const void *)&mask->hdr.icmp_code,
 							sizeof(uint8_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_ICMP +
-				(2 * sizeof(uint8_t)));
+	flow->key_size += 2 * sizeof(uint8_t);
 
 	return device_configured;
 }
@@ -837,13 +810,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 
 	if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
 		index = priv->extract.qos_key_cfg.num_extracts;
-		priv->extract.qos_key_cfg.extracts[index].type =
-							DPKG_EXTRACT_FROM_HDR;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		index++;
-
 		priv->extract.qos_key_cfg.extracts[index].type =
 							DPKG_EXTRACT_FROM_HDR;
 		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -862,13 +828,6 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 
 	if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
 		index = priv->extract.fs_key_cfg[group].num_extracts;
-		priv->extract.fs_key_cfg[group].extracts[index].type =
-							DPKG_EXTRACT_FROM_HDR;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		index++;
-
 		priv->extract.fs_key_cfg[group].extracts[index].type =
 							DPKG_EXTRACT_FROM_HDR;
 		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -892,25 +851,21 @@ dpaa2_configure_flow_udp(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_udp *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
-					(2 * sizeof(uint32_t));
-	memset((void *)key_iova, 0x11, sizeof(uint8_t));
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_UDP;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
 							sizeof(uint16_t));
 	key_iova +=  sizeof(uint16_t);
 	memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
 							sizeof(uint16_t));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_UDP;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
 							sizeof(uint16_t));
 	mask_iova +=  sizeof(uint16_t);
 	memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
 							sizeof(uint16_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_UDP +
-				(2 * sizeof(uint16_t)));
+	flow->key_size += (2 * sizeof(uint16_t));
 
 	return device_configured;
 }
@@ -986,13 +941,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 
 	if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
 		index = priv->extract.qos_key_cfg.num_extracts;
-		priv->extract.qos_key_cfg.extracts[index].type =
-							DPKG_EXTRACT_FROM_HDR;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		index++;
-
 		priv->extract.qos_key_cfg.extracts[index].type =
 							DPKG_EXTRACT_FROM_HDR;
 		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1012,13 +960,6 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 
 	if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
 		index = priv->extract.fs_key_cfg[group].num_extracts;
-		priv->extract.fs_key_cfg[group].extracts[index].type =
-							DPKG_EXTRACT_FROM_HDR;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		index++;
-
 		priv->extract.fs_key_cfg[group].extracts[index].type =
 							DPKG_EXTRACT_FROM_HDR;
 		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1042,25 +983,21 @@ dpaa2_configure_flow_tcp(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_tcp *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
-					(2 * sizeof(uint32_t));
-	memset((void *)key_iova, 0x06, sizeof(uint8_t));
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_TCP;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
 							sizeof(uint16_t));
 	key_iova += sizeof(uint16_t);
 	memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
 							sizeof(uint16_t));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_TCP;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
 							sizeof(uint16_t));
 	mask_iova += sizeof(uint16_t);
 	memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
 							sizeof(uint16_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_TCP +
-				(2 * sizeof(uint16_t)));
+	flow->key_size += 2 * sizeof(uint16_t);
 
 	return device_configured;
 }
@@ -1136,13 +1073,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 
 	if (device_configured & DPAA2_QOS_TABLE_RECONFIGURE) {
 		index = priv->extract.qos_key_cfg.num_extracts;
-		priv->extract.qos_key_cfg.extracts[index].type =
-							DPKG_EXTRACT_FROM_HDR;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		index++;
-
 		priv->extract.qos_key_cfg.extracts[index].type =
 							DPKG_EXTRACT_FROM_HDR;
 		priv->extract.qos_key_cfg.extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1162,13 +1092,6 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 
 	if (device_configured & DPAA2_FS_TABLE_RECONFIGURE) {
 		index = priv->extract.fs_key_cfg[group].num_extracts;
-		priv->extract.fs_key_cfg[group].extracts[index].type =
-							DPKG_EXTRACT_FROM_HDR;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.prot = NET_PROT_IP;
-		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.field = NH_FLD_IP_PROTO;
-		index++;
-
 		priv->extract.fs_key_cfg[group].extracts[index].type =
 							DPKG_EXTRACT_FROM_HDR;
 		priv->extract.fs_key_cfg[group].extracts[index].extract.from_hdr.type = DPKG_FULL_FIELD;
@@ -1192,25 +1115,22 @@ dpaa2_configure_flow_sctp(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_sctp *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_IPV4 +
-						(2 * sizeof(uint32_t));
-	memset((void *)key_iova, 0x84, sizeof(uint8_t));
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_SCTP;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)(&spec->hdr.src_port),
 							sizeof(uint16_t));
 	key_iova += sizeof(uint16_t);
 	memcpy((void *)key_iova, (const void *)(&spec->hdr.dst_port),
 							sizeof(uint16_t));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_SCTP;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)(&mask->hdr.src_port),
 							sizeof(uint16_t));
 	mask_iova += sizeof(uint16_t);
 	memcpy((void *)mask_iova, (const void *)(&mask->hdr.dst_port),
 							sizeof(uint16_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_SCTP +
-				(2 * sizeof(uint16_t)));
+	flow->key_size += 2 * sizeof(uint16_t);
+
 	return device_configured;
 }
 
@@ -1313,15 +1233,15 @@ dpaa2_configure_flow_gre(struct rte_flow *flow,
 	mask	= (const struct rte_flow_item_gre *)
 			(pattern->mask ? pattern->mask : default_mask);
 
-	key_iova = flow->rule.key_iova + DPAA2_CLS_RULE_OFFSET_GRE;
+	key_iova = flow->rule.key_iova + flow->key_size;
 	memcpy((void *)key_iova, (const void *)(&spec->protocol),
 							sizeof(rte_be16_t));
 
-	mask_iova = flow->rule.mask_iova + DPAA2_CLS_RULE_OFFSET_GRE;
+	mask_iova = flow->rule.mask_iova + flow->key_size;
 	memcpy((void *)mask_iova, (const void *)(&mask->protocol),
 							sizeof(rte_be16_t));
 
-	flow->rule.key_size = (DPAA2_CLS_RULE_OFFSET_GRE + sizeof(rte_be16_t));
+	flow->key_size += sizeof(rte_be16_t);
 
 	return device_configured;
 }
@@ -1503,6 +1423,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 
 			action.flow_id = action.flow_id % nic_attr.num_rx_tcs;
 			index = flow->index + (flow->tc_id * nic_attr.fs_entries);
+			flow->rule.key_size = flow->key_size;
 			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW,
 						priv->token, &flow->rule,
 						flow->tc_id, index,
@@ -1606,6 +1527,7 @@ dpaa2_generic_flow_set(struct rte_flow *flow,
 
 			/* Add Rule into QoS table */
 			index = flow->index + (flow->tc_id * nic_attr.fs_entries);
+			flow->rule.key_size = flow->key_size;
 			ret = dpni_add_qos_entry(dpni, CMD_PRI_LOW, priv->token,
 						&flow->rule, flow->tc_id,
 						index, 0, 0);
@@ -1862,7 +1784,7 @@ struct rte_flow *dpaa2_flow_create(struct rte_eth_dev *dev,
 
 	flow->rule.key_iova = key_iova;
 	flow->rule.mask_iova = mask_iova;
-	flow->rule.key_size = 0;
+	flow->key_size = 0;
 
 	switch (dpaa2_filter_type) {
 	case RTE_ETH_FILTER_GENERIC:
-- 
2.17.1

